Test Report: Docker_Cloud_Shell 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (7/108)

TestAddons/serial/Volcano (204.52s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 389.442456ms
addons_test.go:905: volcano-admission stabilized in 412.60401ms
addons_test.go:897: volcano-scheduler stabilized in 430.654714ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-x2rvr" [d06233f8-443f-4609-882a-0318b2aea2f1] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005486331s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-86lvv" [5bd79bfd-17fd-4922-a1dc-2770f73c9271] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005868174s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2xcbp" [dbc985c4-21bc-491b-ae0f-444a93cacb8b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.006531636s
addons_test.go:932: (dbg) Run:  kubectl --context addons-331995 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-331995 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-331995 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b3abdb59-9e79-449a-94bb-0cb11ab8519e] Pending
helpers_test.go:344: "test-job-nginx-0" [b3abdb59-9e79-449a-94bb-0cb11ab8519e] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-331995 -n addons-331995
addons_test.go:964: (dbg) Done: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-331995 -n addons-331995: (1.044541764s)
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-12 21:53:31.03528914 +0000 UTC m=+462.277755524
addons_test.go:964: (dbg) Run:  kubectl --context addons-331995 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-331995 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-fcc64ed3-6c4f-4caf-8743-2ca6d2d75a6f
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xf4v (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-7xf4v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From     Message
  ----     ------            ----  ----     -------
  Warning  FailedScheduling  3m    volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-331995 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-331995 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
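
Editor's note: the describe output above contains enough detail to reconstruct the failing job. Below is a minimal sketch of a Volcano Job equivalent to testdata/vcjob.yaml, inferred from the pod's labels, annotations, and container spec; apiVersion, schedulerName, and minAvailable are assumptions based on Volcano's batch.volcano.sh/v1alpha1 API, everything else comes from the log. The final command is a hedged way to inspect the node's allocatable CPU behind the "1 Insufficient cpu" event.

	# Sketch reconstructed from the describe output; fields marked "assumption"
	# are not visible in the log.
	kubectl --context addons-331995 apply -f - <<-'EOF'
	apiVersion: batch.volcano.sh/v1alpha1    # assumption: Volcano batch API
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano                 # assumption (Events show "From: volcano")
	  queue: test                            # from label volcano.sh/queue-name=test
	  minAvailable: 1                        # assumption
	  tasks:
	    - replicas: 1
	      name: nginx                        # from annotation volcano.sh/task-spec: nginx
	      template:
	        spec:
	          containers:
	            - name: nginx
	              image: nginx:latest
	              command: ["sleep", "10m"]
	              resources:
	                requests:
	                  cpu: "1"               # the request the scheduler could not satisfy
	                limits:
	                  cpu: "1"
	EOF

	# Compare the 1-CPU request against what the single node still has free:
	kubectl --context addons-331995 describe node addons-331995 | grep -A10 'Allocated resources'
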
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-331995
helpers_test.go:235: (dbg) docker inspect addons-331995:

-- stdout --
	[
	    {
	        "Id": "19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e",
	        "Created": "2024-09-12T21:46:36.531837396Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 70419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T21:46:36.712670886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1e046fff9d873d0625e7bcc757c3514a16d475711d13430b9690fa498decc3a8",
	        "ResolvConfPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/hosts",
	        "LogPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e-json.log",
	        "Name": "/addons-331995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-331995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-331995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51-init/diff:/var/lib/docker/overlay2/ffdf788bdf1d1cdb120030b71e5081c18b78a7cda19c1d5699c3f05321eeb2ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-331995",
	                "Source": "/var/lib/docker/volumes/addons-331995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-331995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-331995",
	                "name.minikube.sigs.k8s.io": "addons-331995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b4a752d18a6cbc55187cb58233f7a5aeaa2acef8b04f85ba70a4cd819fd59ae",
	            "SandboxKey": "/var/run/docker/netns/0b4a752d18a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-331995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f1bea63ba7151dba9450c5ddc2a6e5ac868361c31512e383ce228a7ec5e8dc78",
	                    "EndpointID": "96789eb05de757d0cc4124481d4cc051b9af8faa180457cfbd87b3e8bbbc5cab",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-331995",
	                        "19a13011e667"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
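
Editor's note: when only a few fields of an inspect dump like the one above matter, docker inspect's --format flag (a Go template) extracts them directly; the network key contains a hyphen, so it needs an index lookup rather than dot access. A small sketch against the container above:

	# Container state and cluster IP, from the same data as the full dump.
	docker inspect -f '{{.State.Status}}' addons-331995
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-331995").IPAddress}}' addons-331995
	# The ephemeral host port published for 8443/tcp (32791 in the dump above):
	docker port addons-331995 8443
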
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-331995 -n addons-331995
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 logs -n 25: (2.553610194s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                  | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC |                     |
	|         | addons-331995                        |               |                       |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC |                     |
	|         | addons-331995                        |               |                       |         |                     |                     |
	| start   | -p addons-331995 --wait=true         | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC | 12 Sep 24 21:50 UTC |
	|         | --memory=4000 --alsologtostderr      |               |                       |         |                     |                     |
	|         | --addons=registry                    |               |                       |         |                     |                     |
	|         | --addons=metrics-server              |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots             |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                    |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner               |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget            |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |               |                       |         |                     |                     |
	|         | --driver=docker                      |               |                       |         |                     |                     |
	|         | --container-runtime=docker           |               |                       |         |                     |                     |
	|         | --addons=ingress                     |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                 |               |                       |         |                     |                     |
	|         | --addons=helm-tiller                 |               |                       |         |                     |                     |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
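	
	# Editor's note: the Audit table above wraps one start invocation across
	# many rows; flattened back into a single command (arguments exactly as
	# listed in the table, binary path as used elsewhere in this log):
	out/minikube-linux-amd64 start -p addons-331995 --wait=true --memory=4000 \
	  --alsologtostderr --addons=registry --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget \
	  --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --driver=docker --container-runtime=docker \
	  --addons=ingress --addons=ingress-dns --addons=helm-tiller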
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:45:49
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:45:49.078751   69940 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:45:49.078936   69940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:45:49.078947   69940 out.go:358] Setting ErrFile to fd 2...
	I0912 21:45:49.078957   69940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:45:49.079277   69940 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
	W0912 21:45:49.079618   69940 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/config/config.json: no such file or directory
	I0912 21:45:49.080289   69940 out.go:352] Setting JSON to false
	I0912 21:45:49.081284   69940 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":2650,"bootTime":1726174899,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0912 21:45:49.081362   69940 start.go:139] virtualization:  guest
	I0912 21:45:49.085982   69940 out.go:177] * [addons-331995] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	W0912 21:45:49.089793   69940 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:45:49.089851   69940 notify.go:220] Checking for updates...
	I0912 21:45:49.089936   69940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:45:49.093369   69940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:45:49.096802   69940 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 21:45:49.100414   69940 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	I0912 21:45:49.104961   69940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:45:49.108291   69940 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0912 21:45:49.112078   69940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:45:49.157984   69940 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0912 21:45:49.158296   69940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:45:49.266446   69940 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-12 21:45:49.247377546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:45:49.266659   69940 docker.go:318] overlay module found
	I0912 21:45:49.270367   69940 out.go:177] * Using the docker driver based on user configuration
	I0912 21:45:49.273772   69940 start.go:297] selected driver: docker
	I0912 21:45:49.273834   69940 start.go:901] validating driver "docker" against <nil>
	I0912 21:45:49.273861   69940 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:45:49.274759   69940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:45:49.376710   69940 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-12 21:45:49.359923636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:45:49.376966   69940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:45:49.377403   69940 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0912 21:45:49.377429   69940 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0912 21:45:49.377487   69940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:45:49.381228   69940 out.go:177] * Using Docker driver with root privileges
	I0912 21:45:49.385116   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:45:49.385169   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:45:49.385204   69940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:45:49.385345   69940 start.go:340] cluster config:
	{Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:45:49.388847   69940 out.go:177] * Starting "addons-331995" primary control-plane node in "addons-331995" cluster
	I0912 21:45:49.391841   69940 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:45:49.395280   69940 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:45:49.398316   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:49.398465   69940 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:45:49.425373   69940 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:45:49.425838   69940 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:45:49.425995   69940 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:45:49.428953   69940 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:45:49.428985   69940 cache.go:56] Caching tarball of preloaded images
	I0912 21:45:49.429477   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:49.434439   69940 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
	I0912 21:45:49.438083   69940 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:49.471182   69940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:45:52.666899   69940 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:52.667173   69940 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:54.062020   69940 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
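	
	# Editor's note: the download URL above embeds its own md5 in the "checksum"
	# query parameter, which is what the saving/verifying steps re-check. A
	# hand-run equivalent, using the digest and cache path from this log:
	echo "42e9a173dd5f0c45ed1a890dd06aec5a  /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4" | md5sum -c -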
	I0912 21:45:54.062551   69940 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json ...
	I0912 21:45:54.062607   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json: {Name:mkb3372d6a177aebf5f7ec207cfe88817f7c5bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:57.961944   69940 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 21:45:57.961968   69940 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 21:46:23.658165   69940 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 21:46:23.658218   69940 cache.go:194] Successfully downloaded all kic artifacts
	I0912 21:46:23.658296   69940 start.go:360] acquireMachinesLock for addons-331995: {Name:mk84494dea4fc95748971c805f99ee1b550f8b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:46:23.658642   69940 start.go:364] duration metric: took 311.509µs to acquireMachinesLock for "addons-331995"
	I0912 21:46:23.658702   69940 start.go:93] Provisioning new machine with config: &{Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:46:23.658863   69940 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:46:23.663377   69940 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 21:46:23.663825   69940 start.go:159] libmachine.API.Create for "addons-331995" (driver="docker")
	I0912 21:46:23.663871   69940 client.go:168] LocalClient.Create starting
	I0912 21:46:23.664031   69940 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem
	I0912 21:46:23.788484   69940 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem
	I0912 21:46:24.213669   69940 cli_runner.go:164] Run: docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:46:24.239933   69940 cli_runner.go:211] docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:46:24.240140   69940 network_create.go:284] running [docker network inspect addons-331995] to gather additional debugging logs...
	I0912 21:46:24.240277   69940 cli_runner.go:164] Run: docker network inspect addons-331995
	W0912 21:46:24.264508   69940 cli_runner.go:211] docker network inspect addons-331995 returned with exit code 1
	I0912 21:46:24.264627   69940 network_create.go:287] error running [docker network inspect addons-331995]: docker network inspect addons-331995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-331995 not found
	I0912 21:46:24.264656   69940 network_create.go:289] output of [docker network inspect addons-331995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-331995 not found
	
	** /stderr **
	I0912 21:46:24.264830   69940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:46:24.292427   69940 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc016ce2980}
	I0912 21:46:24.292498   69940 network_create.go:124] attempt to create docker network addons-331995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0912 21:46:24.292619   69940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-331995 addons-331995
	I0912 21:46:24.398856   69940 network_create.go:108] docker network addons-331995 192.168.49.0/24 created
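	
	# Editor's note: a quick, hedged way to confirm the subnet and gateway the
	# network create above produced (they reappear in the NetworkSettings block
	# of the docker inspect dump earlier in this report):
	docker network inspect addons-331995 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'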
	I0912 21:46:24.398905   69940 kic.go:121] calculated static IP "192.168.49.2" for the "addons-331995" container
	I0912 21:46:24.399091   69940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:46:24.424907   69940 cli_runner.go:164] Run: docker volume create addons-331995 --label name.minikube.sigs.k8s.io=addons-331995 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:46:24.454817   69940 oci.go:103] Successfully created a docker volume addons-331995
	I0912 21:46:24.454997   69940 cli_runner.go:164] Run: docker run --rm --name addons-331995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --entrypoint /usr/bin/test -v addons-331995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
	I0912 21:46:28.539745   69940 cli_runner.go:217] Completed: docker run --rm --name addons-331995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --entrypoint /usr/bin/test -v addons-331995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (4.0846871s)
	I0912 21:46:28.539790   69940 oci.go:107] Successfully prepared a docker volume addons-331995
	I0912 21:46:28.539820   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:46:28.539853   69940 kic.go:194] Starting extracting preloaded images to volume ...
	I0912 21:46:28.539991   69940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-331995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:46:36.409576   69940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-331995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (7.869507519s)
	I0912 21:46:36.409624   69940 kic.go:203] duration metric: took 7.869767004s to extract preloaded images to volume ...
	W0912 21:46:36.409752   69940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0912 21:46:36.409820   69940 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0912 21:46:36.409914   69940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:46:36.504599   69940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-331995 --name addons-331995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-331995 --network addons-331995 --ip 192.168.49.2 --volume addons-331995:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
	I0912 21:46:36.938910   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Running}}
	I0912 21:46:36.985522   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:37.032879   69940 cli_runner.go:164] Run: docker exec addons-331995 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:46:37.150453   69940 oci.go:144] the created container "addons-331995" has a running status.
	I0912 21:46:37.150493   69940 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa...
	I0912 21:46:37.682434   69940 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:46:37.774300   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:37.846181   69940 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:46:37.846234   69940 kic_runner.go:114] Args: [docker exec --privileged addons-331995 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:46:38.047377   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:38.100752   69940 machine.go:93] provisionDockerMachine start ...
	I0912 21:46:38.100937   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.149661   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.150009   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.150027   69940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 21:46:38.355282   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-331995
	
	I0912 21:46:38.355313   69940 ubuntu.go:169] provisioning hostname "addons-331995"
	I0912 21:46:38.355434   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.393288   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.393672   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.393703   69940 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-331995 && echo "addons-331995" | sudo tee /etc/hostname
	I0912 21:46:38.590768   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-331995
	
	I0912 21:46:38.590910   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.633281   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.633628   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.633661   69940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-331995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-331995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-331995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:46:38.786104   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:46:38.786140   69940 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube}
	I0912 21:46:38.786174   69940 ubuntu.go:177] setting up certificates
	I0912 21:46:38.786192   69940 provision.go:84] configureAuth start
	I0912 21:46:38.786332   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:38.824555   69940 provision.go:143] copyHostCerts
	I0912 21:46:38.824693   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.pem (1119 bytes)
	I0912 21:46:38.824922   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cert.pem (1164 bytes)
	I0912 21:46:38.825127   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/key.pem (1679 bytes)
	I0912 21:46:38.825261   69940 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-331995 san=[127.0.0.1 192.168.49.2 addons-331995 localhost minikube]
	I0912 21:46:38.900500   69940 provision.go:177] copyRemoteCerts
	I0912 21:46:38.900622   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:46:38.900707   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.927613   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:39.029653   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:46:39.071478   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
	I0912 21:46:39.117968   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 21:46:39.157603   69940 provision.go:87] duration metric: took 371.391849ms to configureAuth
	I0912 21:46:39.157709   69940 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:46:39.158123   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:46:39.158263   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.188586   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.188900   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.188926   69940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 21:46:39.327914   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0912 21:46:39.328041   69940 ubuntu.go:71] root file system type: overlay
	I0912 21:46:39.328287   69940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 21:46:39.328481   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.357575   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.357899   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.358027   69940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 21:46:39.517145   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 21:46:39.517307   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.547456   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.547854   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.547892   69940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 21:46:40.733033   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-12 21:46:39.513858772 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
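	The unit swap above is deliberately idempotent: diff -u exits 0 when the rendered file matches what is installed, so the || { ... } branch (move, daemon-reload, enable, restart) only fires when the configuration actually changed, and an unchanged run never restarts docker. A rough Go sketch of that guard; runSSH is a hypothetical stand-in for minikube's ssh_runner:

	package main

	import "fmt"

	// updateUnitIfChanged replaces the docker unit and restarts the daemon
	// only when the new rendering differs from the installed file.
	func updateUnitIfChanged(runSSH func(cmd string) error) error {
		const cmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
			`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
			`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
		return runSSH(cmd)
	}

	func main() {
		_ = updateUnitIfChanged(func(cmd string) error {
			fmt.Println("would run over SSH:", cmd) // dry-run stub
			return nil
		})
	}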
	I0912 21:46:40.733173   69940 machine.go:96] duration metric: took 2.632387975s to provisionDockerMachine
	I0912 21:46:40.733193   69940 client.go:171] duration metric: took 17.069311471s to LocalClient.Create
	I0912 21:46:40.733219   69940 start.go:167] duration metric: took 17.069398477s to libmachine.API.Create "addons-331995"
	I0912 21:46:40.733235   69940 start.go:293] postStartSetup for "addons-331995" (driver="docker")
	I0912 21:46:40.733255   69940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:46:40.733385   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:46:40.733478   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:40.765655   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:40.869352   69940 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:46:40.875276   69940 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:46:40.875344   69940 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:46:40.875362   69940 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:46:40.875391   69940 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 21:46:40.875417   69940 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/addons for local assets ...
	I0912 21:46:40.875520   69940 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/files for local assets ...
	I0912 21:46:40.875569   69940 start.go:296] duration metric: took 142.323964ms for postStartSetup
	I0912 21:46:40.876187   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:40.904482   69940 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json ...
	I0912 21:46:40.904998   69940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:46:40.905154   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:40.942457   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.037221   69940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:46:41.045006   69940 start.go:128] duration metric: took 17.386113745s to createHost
	I0912 21:46:41.045089   69940 start.go:83] releasing machines lock for "addons-331995", held for 17.386422281s
	I0912 21:46:41.045318   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:41.073837   69940 ssh_runner.go:195] Run: cat /version.json
	I0912 21:46:41.073856   69940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:46:41.073940   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:41.073971   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:41.119499   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.120558   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.327179   69940 ssh_runner.go:195] Run: systemctl --version
	I0912 21:46:41.335113   69940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:46:41.342795   69940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 21:46:41.386000   69940 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 21:46:41.386341   69940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:46:41.435621   69940 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
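	The two find/sed passes above first patch any loopback CNI config in place (adding a "name" field if missing and pinning cniVersion to 1.0.0), then rename stray bridge/podman configs to *.mk_disabled so they cannot conflict with minikube's own CNI. A hypothetical in-memory Go version of the loopback patch:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// patchLoopback ensures a loopback CNI config carries a "name" field and
	// a cniVersion of "1.0.0", matching the sed edits in the log.
	func patchLoopback(raw []byte) ([]byte, error) {
		var conf map[string]any
		if err := json.Unmarshal(raw, &conf); err != nil {
			return nil, err
		}
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback"
		}
		conf["cniVersion"] = "1.0.0"
		return json.MarshalIndent(conf, "", "  ")
	}

	func main() {
		out, err := patchLoopback([]byte(`{"cniVersion": "0.3.1", "type": "loopback"}`))
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}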
	I0912 21:46:41.435690   69940 start.go:495] detecting cgroup driver to use...
	I0912 21:46:41.435737   69940 detect.go:190] detected "systemd" cgroup driver on host os
	I0912 21:46:41.436121   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:46:41.465963   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:46:41.483761   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:46:41.500944   69940 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0912 21:46:41.501163   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0912 21:46:41.519913   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:46:41.536657   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:46:41.554994   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:46:41.572285   69940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:46:41.588529   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:46:41.605783   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:46:41.622457   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
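	Taken together, the sed passes above rewrite /etc/containerd/config.toml in place: sandbox_image is pinned to pause:3.10, restrict_oom_score_adj is disabled, SystemdCgroup is flipped to true (matching the "systemd" cgroup driver detected on the host), legacy runc v1 runtime names are upgraded to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is re-inserted under the CRI plugin. As one representative example, the SystemdCgroup pass translates to Go roughly as (illustrative only):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setSystemdCgroup is the Go equivalent of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	// It flips containerd's runc cgroup driver to systemd, preserving indentation.
	func setSystemdCgroup(toml string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(toml, "${1}SystemdCgroup = true")
	}

	func main() {
		in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = false\n"
		fmt.Print(setSystemdCgroup(in))
	}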
	I0912 21:46:41.639495   69940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:46:41.654641   69940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:46:41.669714   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:41.810644   69940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 21:46:42.035548   69940 start.go:495] detecting cgroup driver to use...
	I0912 21:46:42.035610   69940 detect.go:190] detected "systemd" cgroup driver on host os
	I0912 21:46:42.035699   69940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 21:46:42.109199   69940 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0912 21:46:42.109308   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 21:46:42.151003   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:46:42.199177   69940 ssh_runner.go:195] Run: which cri-dockerd
	I0912 21:46:42.207435   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 21:46:42.229970   69940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 21:46:42.279564   69940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 21:46:42.512322   69940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 21:46:42.750642   69940 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0912 21:46:42.750833   69940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0912 21:46:42.788701   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:42.925937   69940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 21:46:43.405664   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 21:46:43.426929   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:46:43.446539   69940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 21:46:43.588320   69940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 21:46:43.729539   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:43.868746   69940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 21:46:43.898554   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:46:43.918693   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:44.059394   69940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 21:46:44.177994   69940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 21:46:44.178155   69940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 21:46:44.188096   69940 start.go:563] Will wait 60s for crictl version
	I0912 21:46:44.188210   69940 ssh_runner.go:195] Run: which crictl
	I0912 21:46:44.196142   69940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:46:44.254781   69940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 21:46:44.254907   69940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:46:44.299238   69940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:46:44.352803   69940 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 21:46:44.352984   69940 cli_runner.go:164] Run: docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:46:44.380002   69940 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:46:44.386035   69940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
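	The hosts update above is another idempotent rewrite: grep -v drops any existing host.minikube.internal line, the fresh mapping is appended, and the result is copied back over /etc/hosts through a temp file. An in-memory Go sketch of the same transform (hypothetical helper; the real flow goes through /tmp/h.$$ and sudo cp):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostEntry drops any line ending in "<tab>name" and appends a
	// fresh "ip<tab>name" mapping, mirroring the shell one-liner in the log.
	func ensureHostEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n"
		fmt.Print(ensureHostEntry(hosts, "192.168.49.1", "host.minikube.internal"))
	}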
	I0912 21:46:44.408949   69940 out.go:177]   - kubelet.cgroups-per-qos=false
	I0912 21:46:44.414572   69940 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0912 21:46:44.422631   69940 kubeadm.go:883] updating cluster {Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:46:44.422838   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:46:44.422979   69940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:46:44.456297   69940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:46:44.456410   69940 docker.go:615] Images already preloaded, skipping extraction
	I0912 21:46:44.456605   69940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:46:44.490743   69940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:46:44.490798   69940 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:46:44.490814   69940 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0912 21:46:44.490961   69940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-331995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
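	This kubelet unit uses the same drop-in trick as the docker unit earlier: an empty ExecStart= clears the inherited command before the real one is set, and the ExtraOptions from the cluster config ({cgroups-per-qos false, enforce-node-allocatable ""}) surface as --key=value flags on that line. A simplified sketch of how such flags could be rendered (an assumption about the templating, not minikube's actual code):

	package main

	import (
		"fmt"
		"sort"
		"strings"
	)

	// renderKubeletFlags turns ExtraOptions-style key/value pairs into the
	// --key=value flags seen on the ExecStart line above.
	func renderKubeletFlags(opts map[string]string) string {
		keys := make([]string, 0, len(opts))
		for k := range opts {
			keys = append(keys, k)
		}
		sort.Strings(keys) // stable output
		var flags []string
		for _, k := range keys {
			flags = append(flags, fmt.Sprintf("--%s=%s", k, opts[k]))
		}
		return strings.Join(flags, " ")
	}

	func main() {
		fmt.Println(renderKubeletFlags(map[string]string{
			"cgroups-per-qos":          "false",
			"enforce-node-allocatable": `""`,
		}))
		// Output: --cgroups-per-qos=false --enforce-node-allocatable=""
	}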
	I0912 21:46:44.491072   69940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 21:46:44.573046   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:46:44.573135   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:46:44.573189   69940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:46:44.573261   69940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-331995 NodeName:addons-331995 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:46:44.573501   69940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-331995"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:46:44.573658   69940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:46:44.589730   69940 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:46:44.589956   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:46:44.605714   69940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0912 21:46:44.638162   69940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:46:44.669472   69940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0912 21:46:44.699721   69940 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:46:44.705480   69940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:46:44.724840   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:44.863901   69940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:46:44.902594   69940 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995 for IP: 192.168.49.2
	I0912 21:46:44.902622   69940 certs.go:194] generating shared ca certs ...
	I0912 21:46:44.902649   69940 certs.go:226] acquiring lock for ca certs: {Name:mk07132fcad645396ad0113bfe1144f20ebd53cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:44.903019   69940 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key
	I0912 21:46:45.136562   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt ...
	I0912 21:46:45.136605   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt: {Name:mkd77991a3935507c9e39e1e8c7352eb64a051a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.137041   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key ...
	I0912 21:46:45.137107   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key: {Name:mkaded902d60e06576b03be4b279e2d32b5cf911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.137441   69940 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key
	I0912 21:46:45.393857   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt ...
	I0912 21:46:45.393900   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt: {Name:mk1c59db2db7bf45fc1d1c32c142700c002c11a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.394371   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key ...
	I0912 21:46:45.394399   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key: {Name:mk282dce664465d1df24006e91d6f07a7df93911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
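	The two CA steps above each mint a self-signed certificate plus private key and persist both under a file lock. A compact Go sketch of generating such a CA with the standard library; the key size and validity period here are assumptions, not minikube's exact parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0), // assumed 3-year validity
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: template and parent are the same certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}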
	I0912 21:46:45.394718   69940 certs.go:256] generating profile certs ...
	I0912 21:46:45.394844   69940 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key
	I0912 21:46:45.394887   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt with IP's: []
	I0912 21:46:45.636662   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt ...
	I0912 21:46:45.636707   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: {Name:mke31e589f83a1c38a024aea52726e376c982342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.637192   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key ...
	I0912 21:46:45.637225   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key: {Name:mk39b4fda5dce98acf00f584ee063c859bc97327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.637568   69940 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548
	I0912 21:46:45.637628   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0912 21:46:45.994706   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 ...
	I0912 21:46:45.994751   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548: {Name:mk1efb70c236fa62bb69f0dd1d330505bbd1c6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.995215   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548 ...
	I0912 21:46:45.995247   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548: {Name:mk609e719ed674e4e5f39cc7500611ef674a975b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.995565   69940 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt
	I0912 21:46:45.995783   69940 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548 -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key
	I0912 21:46:45.995900   69940 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key
	I0912 21:46:45.995957   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt with IP's: []
	I0912 21:46:46.176876   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt ...
	I0912 21:46:46.176919   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt: {Name:mked3e93f8e89d479655f0a83f0cf91acc0dff4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:46.177391   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key ...
	I0912 21:46:46.177422   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key: {Name:mkf183ad3f3c9b739996fca014f9b7a3ab18fed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:46.177991   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 21:46:46.178091   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem (1119 bytes)
	I0912 21:46:46.178157   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem (1164 bytes)
	I0912 21:46:46.178236   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem (1679 bytes)
	I0912 21:46:46.179148   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:46:46.220800   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:46:46.261620   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:46:46.302857   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:46:46.343368   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:46:46.384900   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:46:46.426413   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:46:46.468244   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 21:46:46.515336   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:46:46.575446   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:46:46.614664   69940 ssh_runner.go:195] Run: openssl version
	I0912 21:46:46.623677   69940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:46:46.640711   69940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.646985   69940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:46 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.647198   69940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.658086   69940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:46:46.674601   69940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:46:46.680510   69940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:46:46.680675   69940 kubeadm.go:392] StartCluster: {Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:46:46.680914   69940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 21:46:46.710471   69940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:46:46.726383   69940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:46:46.742178   69940 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:46:46.742353   69940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:46:46.757766   69940 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:46:46.757791   69940 kubeadm.go:157] found existing configuration files:
	
	I0912 21:46:46.757889   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:46:46.773816   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:46:46.773956   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:46:46.788964   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:46:46.804401   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:46:46.804535   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:46:46.819081   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:46:46.834253   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:46:46.834468   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:46:46.849366   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:46:46.865086   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:46:46.865202   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
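	The four grep-then-rm rounds above implement one rule: a leftover kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443; anything missing or stale is deleted so the kubeadm init below regenerates it. A hypothetical local version of that loop:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs keeps a kubeconfig only if it already points at the
	// expected control-plane endpoint; otherwise it removes the file so
	// kubeadm writes a fresh one.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // missing or stale: safe to regenerate
				fmt.Println("removed", f)
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}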
	I0912 21:46:46.880459   69940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:46:46.940128   69940 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:46:46.940282   69940 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:46:47.065304   69940 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:46:47.065529   69940 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:46:47.065805   69940 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:46:47.084539   69940 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:46:47.089717   69940 out.go:235]   - Generating certificates and keys ...
	I0912 21:46:47.089881   69940 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:46:47.090001   69940 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:46:47.283203   69940 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:46:47.425832   69940 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:46:47.774554   69940 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:46:47.971522   69940 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:46:48.125270   69940 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:46:48.125919   69940 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-331995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:46:48.255292   69940 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:46:48.255825   69940 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-331995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:46:48.576713   69940 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:46:48.665241   69940 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:46:48.912557   69940 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:46:48.912922   69940 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:46:49.070523   69940 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:46:49.161285   69940 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:46:49.288107   69940 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:46:49.390656   69940 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:46:49.527646   69940 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:46:49.535586   69940 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:46:49.535719   69940 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:46:49.539642   69940 out.go:235]   - Booting up control plane ...
	I0912 21:46:49.539823   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:46:49.539961   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:46:49.540105   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:46:49.567872   69940 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:46:49.577736   69940 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:46:49.577843   69940 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:46:49.741755   69940 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:46:49.742025   69940 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:46:50.241387   69940 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.749206ms
	I0912 21:46:50.241586   69940 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:46:57.245270   69940 kubeadm.go:310] [api-check] The API server is healthy after 7.003793097s
	I0912 21:46:57.268751   69940 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:46:57.290537   69940 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:46:57.359707   69940 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:46:57.360167   69940 kubeadm.go:310] [mark-control-plane] Marking the node addons-331995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:46:57.420897   69940 kubeadm.go:310] [bootstrap-token] Using token: eviub1.04vr1snjz6iiyfq2
	I0912 21:46:57.425241   69940 out.go:235]   - Configuring RBAC rules ...
	I0912 21:46:57.425709   69940 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:46:57.453711   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:46:57.487364   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:46:57.494407   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:46:57.504521   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:46:57.512811   69940 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:46:57.655762   69940 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:46:58.275732   69940 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:46:58.659405   69940 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:46:58.661475   69940 kubeadm.go:310] 
	I0912 21:46:58.661624   69940 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:46:58.661636   69940 kubeadm.go:310] 
	I0912 21:46:58.661900   69940 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:46:58.661918   69940 kubeadm.go:310] 
	I0912 21:46:58.661994   69940 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:46:58.662152   69940 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:46:58.662263   69940 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:46:58.662274   69940 kubeadm.go:310] 
	I0912 21:46:58.662384   69940 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:46:58.662394   69940 kubeadm.go:310] 
	I0912 21:46:58.662495   69940 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:46:58.662505   69940 kubeadm.go:310] 
	I0912 21:46:58.662613   69940 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:46:58.662772   69940 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:46:58.662929   69940 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:46:58.662939   69940 kubeadm.go:310] 
	I0912 21:46:58.663258   69940 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:46:58.663432   69940 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:46:58.663444   69940 kubeadm.go:310] 
	I0912 21:46:58.663620   69940 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eviub1.04vr1snjz6iiyfq2 \
	I0912 21:46:58.663851   69940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c3b7faca2e9b71afa48ddc2040f3d4018e7f54d7a41f332b5ec5aea93a2e14 \
	I0912 21:46:58.664317   69940 kubeadm.go:310] 	--control-plane 
	I0912 21:46:58.664338   69940 kubeadm.go:310] 
	I0912 21:46:58.664503   69940 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:46:58.664514   69940 kubeadm.go:310] 
	I0912 21:46:58.664680   69940 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eviub1.04vr1snjz6iiyfq2 \
	I0912 21:46:58.664893   69940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c3b7faca2e9b71afa48ddc2040f3d4018e7f54d7a41f332b5ec5aea93a2e14 
	I0912 21:46:58.670534   69940 kubeadm.go:310] W0912 21:46:46.935997    1691 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:46:58.671125   69940 kubeadm.go:310] W0912 21:46:46.937105    1691 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:46:58.671368   69940 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
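The two deprecation warnings above name the fix themselves. As a sketch (assuming minikube's generated kubeadm config lives at /var/tmp/minikube/kubeadm.yaml, a path not shown in this log), the migration would look like:

	minikube -p addons-331995 ssh -- sudo kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-v1beta4.yaml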
	I0912 21:46:58.671446   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:46:58.671476   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:46:58.675563   69940 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:46:58.679321   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:46:58.696746   69940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
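The 496-byte conflist copied above is not reproduced in the log. As an illustrative sketch only (the exact file minikube writes may differ in fields and subnet), a bridge CNI config of this kind typically looks like:

	# illustrative bridge CNI conflist; values are assumptions, not the file from this run
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF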
	I0912 21:46:58.733918   69940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:46:58.734041   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-331995 minikube.k8s.io/updated_at=2024_09_12T21_46_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-331995 minikube.k8s.io/primary=true
	I0912 21:46:58.733936   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:58.962174   69940 ops.go:34] apiserver oom_adj: -16
	I0912 21:46:58.962316   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:59.463281   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:59.963412   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:00.462514   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:00.962887   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:01.462919   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:01.963111   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:02.463357   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:02.600871   69940 kubeadm.go:1113] duration metric: took 3.867041357s to wait for elevateKubeSystemPrivileges
	I0912 21:47:02.600910   69940 kubeadm.go:394] duration metric: took 15.920243177s to StartCluster
	I0912 21:47:02.600941   69940 settings.go:142] acquiring lock: {Name:mk841109e15a3b6330c92e1dba5779a890c1b040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:47:02.601346   69940 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 21:47:02.602154   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig: {Name:mk2e26d24f77797e24558e31cf6990f1997e9f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:47:02.602665   69940 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:47:02.602904   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:47:02.603371   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:47:02.603431   69940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
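The toEnable map above mirrors minikube's per-addon settings; the same state can be inspected or changed from the CLI once the cluster is up (sketch, not part of this run):

	minikube addons list -p addons-331995
	minikube addons enable volcano -p addons-331995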
	I0912 21:47:02.603550   69940 addons.go:69] Setting yakd=true in profile "addons-331995"
	I0912 21:47:02.603599   69940 addons.go:234] Setting addon yakd=true in "addons-331995"
	I0912 21:47:02.603648   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.604861   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.605169   69940 addons.go:69] Setting inspektor-gadget=true in profile "addons-331995"
	I0912 21:47:02.605210   69940 addons.go:234] Setting addon inspektor-gadget=true in "addons-331995"
	I0912 21:47:02.605248   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.605971   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.606728   69940 addons.go:69] Setting metrics-server=true in profile "addons-331995"
	I0912 21:47:02.606798   69940 addons.go:234] Setting addon metrics-server=true in "addons-331995"
	I0912 21:47:02.606823   69940 addons.go:69] Setting cloud-spanner=true in profile "addons-331995"
	I0912 21:47:02.606843   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.606860   69940 addons.go:234] Setting addon cloud-spanner=true in "addons-331995"
	I0912 21:47:02.606915   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.607562   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.607566   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.610945   69940 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-331995"
	I0912 21:47:02.611091   69940 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-331995"
	I0912 21:47:02.611151   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.611459   69940 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-331995"
	I0912 21:47:02.611634   69940 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-331995"
	I0912 21:47:02.611845   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.612098   69940 addons.go:69] Setting default-storageclass=true in profile "addons-331995"
	I0912 21:47:02.612185   69940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-331995"
	I0912 21:47:02.612991   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.613946   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.622156   69940 addons.go:69] Setting gcp-auth=true in profile "addons-331995"
	I0912 21:47:02.622217   69940 mustload.go:65] Loading cluster: addons-331995
	I0912 21:47:02.622736   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:47:02.623313   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.623828   69940 addons.go:69] Setting registry=true in profile "addons-331995"
	I0912 21:47:02.623905   69940 addons.go:234] Setting addon registry=true in "addons-331995"
	I0912 21:47:02.623969   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.624827   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.634347   69940 addons.go:69] Setting helm-tiller=true in profile "addons-331995"
	I0912 21:47:02.634449   69940 addons.go:234] Setting addon helm-tiller=true in "addons-331995"
	I0912 21:47:02.634516   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.635355   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.638783   69940 addons.go:69] Setting storage-provisioner=true in profile "addons-331995"
	I0912 21:47:02.638869   69940 addons.go:234] Setting addon storage-provisioner=true in "addons-331995"
	I0912 21:47:02.638926   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.639961   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.661949   69940 addons.go:69] Setting ingress=true in profile "addons-331995"
	I0912 21:47:02.662081   69940 addons.go:234] Setting addon ingress=true in "addons-331995"
	I0912 21:47:02.662165   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.663043   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.666400   69940 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-331995"
	I0912 21:47:02.666498   69940 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-331995"
	I0912 21:47:02.667041   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.680380   69940 addons.go:69] Setting ingress-dns=true in profile "addons-331995"
	I0912 21:47:02.680461   69940 addons.go:234] Setting addon ingress-dns=true in "addons-331995"
	I0912 21:47:02.680533   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.681382   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.688341   69940 addons.go:69] Setting volcano=true in profile "addons-331995"
	I0912 21:47:02.688557   69940 addons.go:234] Setting addon volcano=true in "addons-331995"
	I0912 21:47:02.688676   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.689759   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.721377   69940 addons.go:69] Setting volumesnapshots=true in profile "addons-331995"
	I0912 21:47:02.721484   69940 addons.go:234] Setting addon volumesnapshots=true in "addons-331995"
	I0912 21:47:02.721559   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.722633   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.725963   69940 out.go:177] * Verifying Kubernetes components...
	I0912 21:47:02.732168   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:47:02.861576   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.895815   69940 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:47:02.903665   69940 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:47:02.903790   69940 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:47:02.903966   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:02.907465   69940 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:47:02.914328   69940 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:47:02.914445   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:47:02.914625   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.142536   69940 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:47:03.149399   69940 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:47:03.149894   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:47:03.153117   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.238795   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:47:03.252154   69940 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:47:03.257230   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:47:03.257275   69940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:47:03.257439   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.297455   69940 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:47:03.306985   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:47:03.307107   69940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:47:03.307254   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.357918   69940 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:47:03.382397   69940 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:47:03.390349   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.395776   69940 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:47:03.395944   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:47:03.396340   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.424211   69940 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:47:03.429257   69940 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:47:03.429291   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:47:03.429398   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.466592   69940 addons.go:234] Setting addon default-storageclass=true in "addons-331995"
	I0912 21:47:03.466783   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.467802   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:03.475103   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.479671   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:47:03.483156   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:47:03.483288   69940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:47:03.483476   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.514733   69940 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-331995"
	I0912 21:47:03.514802   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.515898   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:03.528988   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:47:03.537921   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:47:03.544167   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:03.549107   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:47:03.552093   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:03.554801   69940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:47:03.555853   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:47:03.561225   69940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:47:03.561283   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:47:03.561412   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.570113   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:47:03.575932   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:47:03.581860   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:47:03.556334   69940 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:47:03.582123   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:47:03.582084   69940 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:47:03.582318   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.586243   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:47:03.586272   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:47:03.586375   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.627136   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:47:03.635130   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:47:03.636530   69940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:47:03.638576   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:47:03.638612   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:47:03.638744   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.735887   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.812641   69940 cli_runner.go:217] Completed: docker container inspect addons-331995 --format={{.State.Status}}: (1.122817175s)
	I0912 21:47:03.818157   69940 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:47:03.822292   69940 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:47:03.831657   69940 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:47:03.842804   69940 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:47:03.842919   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:47:03.843207   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.861319   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.886757   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.907313   69940 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:47:03.910965   69940 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:47:03.916784   69940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:47:03.916820   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:47:03.916950   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:04.041952   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.105330   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.108358   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.225280   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.245470   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.261341   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.263915   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.303219   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.308767   69940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:47:04.308813   69940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:47:04.308917   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:04.332244   69940 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:47:04.332277   69940 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:47:04.334487   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.337683   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.393592   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.622506   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:47:04.638973   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:47:04.639109   69940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:47:04.744426   69940 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:47:04.744542   69940 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:47:04.783484   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:47:04.783633   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:47:04.916138   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:47:04.916175   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:47:05.045377   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:47:05.096299   69940 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:47:05.096334   69940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:47:05.121206   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:47:05.124081   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:47:05.154526   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:47:05.154560   69940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:47:05.158505   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:47:05.158555   69940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:47:05.264483   69940 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:47:05.264518   69940 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:47:05.264803   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:47:05.377281   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:47:05.377313   69940 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:47:05.409810   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:47:05.409844   69940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:47:05.430531   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:47:05.430567   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:47:05.444452   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:47:05.459012   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:47:05.459074   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:47:05.470543   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:47:05.511380   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:47:05.538545   69940 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:47:05.538586   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:47:05.738940   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:47:05.738976   69940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:47:05.796139   69940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:47:05.796176   69940 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:47:05.918486   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:47:05.918517   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:47:05.924031   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:47:05.924077   69940 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:47:05.949069   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:47:05.949103   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:47:05.969506   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:47:05.969537   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:47:06.015842   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:47:06.197510   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:47:06.281535   69940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:47:06.281571   69940 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:47:06.364447   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:47:06.439229   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:47:06.439269   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:47:06.469286   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:47:06.483718   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:47:06.483758   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:47:06.831412   69940 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:47:06.831452   69940 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:47:06.954365   69940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.715445561s)
	I0912 21:47:06.954419   69940 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
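The sed pipeline completed above injects a hosts block mapping 192.168.49.1 to host.minikube.internal (with fallthrough) into the CoreDNS Corefile. To verify the injection by hand one could read the ConfigMap back (sketch, not part of the run):

	kubectl --context addons-331995 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'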
	I0912 21:47:06.956396   69940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.319778628s)
	I0912 21:47:06.958067   69940 node_ready.go:35] waiting up to 6m0s for node "addons-331995" to be "Ready" ...
	I0912 21:47:07.016366   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:47:07.016408   69940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:47:07.312358   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:47:07.312393   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:47:07.665908   69940 node_ready.go:49] node "addons-331995" has status "Ready":"True"
	I0912 21:47:07.665943   69940 node_ready.go:38] duration metric: took 707.835474ms for node "addons-331995" to be "Ready" ...
	I0912 21:47:07.665960   69940 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:47:07.726775   69940 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:47:07.726813   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:47:07.758287   69940 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:07.758315   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:47:08.022483   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:47:08.022516   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:47:08.431035   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:47:08.515447   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:47:08.515484   69940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:47:08.535153   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:08.737345   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:47:08.737387   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:47:09.208337   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:47:09.208371   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0912 21:47:09.520267   69940 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-331995" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0912 21:47:09.520313   69940 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
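The failed scale-down above is a standard optimistic-concurrency conflict ("the object has been modified") and the run continues regardless; the same operation could be retried manually (sketch):

	kubectl --context addons-331995 -n kube-system scale deployment coredns --replicas=1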
	I0912 21:47:09.557224   69940 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:09.882093   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:47:09.882151   69940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:47:10.346376   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:47:12.850190   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:47:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:47:13.749314   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.126647445s)
	I0912 21:47:15.219653   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:17.632114   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:19.646249   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (14.524985272s)
	I0912 21:47:19.646390   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.52227182s)
	I0912 21:47:19.646545   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.601133488s)
	I0912 21:47:20.416242   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:22.604351   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:22.990670   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:47:22.990997   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:23.090291   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:23.460850   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:47:23.584505   69940 addons.go:234] Setting addon gcp-auth=true in "addons-331995"
	I0912 21:47:23.584578   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:23.585417   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:23.656267   69940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:47:23.656420   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:23.728843   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:24.675477   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:27.096207   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:29.217989   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:29.317812   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (24.052968507s)
	I0912 21:47:29.317860   69940 addons.go:475] Verifying addon ingress=true in "addons-331995"
	I0912 21:47:29.318321   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (23.873826599s)
	I0912 21:47:29.318432   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (23.84785425s)
	I0912 21:47:29.321120   69940 out.go:177] * Verifying ingress addon...
	I0912 21:47:29.326081   69940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:47:29.732256   69940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:47:29.732377   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:29.928765   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:31.429067   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:31.507389   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:31.632544   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:32.205252   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:32.617438   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.020098   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.503156   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.702318   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:33.930399   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:34.686971   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:34.762704   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (29.251269421s)
	I0912 21:47:34.763091   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (28.747096607s)
	I0912 21:47:34.763174   69940 addons.go:475] Verifying addon registry=true in "addons-331995"
	I0912 21:47:34.763298   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (28.56573774s)
	I0912 21:47:34.763514   69940 addons.go:475] Verifying addon metrics-server=true in "addons-331995"
	I0912 21:47:34.763958   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (28.399451614s)
	I0912 21:47:34.764150   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (28.294818834s)
	I0912 21:47:34.764371   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (26.333282697s)
	I0912 21:47:34.764614   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (26.229423025s)
	W0912 21:47:34.764683   69940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:47:34.764762   69940 retry.go:31] will retry after 220.938937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
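The "ensure CRDs are installed first" error is an ordering race: the VolumeSnapshotClass object lands in the same apply batch as the CRD that defines it. minikube simply retries (and re-applies with --force, per the Run line at 21:47:34.986820 below); done by hand from inside the node, one could instead wait for the CRD to be established before applying the class (sketch):

	kubectl --kubeconfig /var/lib/minikube/kubeconfig wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl --kubeconfig /var/lib/minikube/kubeconfig apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml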
	I0912 21:47:34.767449   69940 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-331995 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:47:34.767785   69940 out.go:177] * Verifying registry addon...
	I0912 21:47:34.773331   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:47:34.986820   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:35.378968   69940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:47:35.379129   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:35.381623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:35.887762   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:35.891746   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.069203   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:36.076593   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.081297   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:36.709261   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:36.712091   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.819876   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (26.473344456s)
	I0912 21:47:36.820037   69940 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-331995"
	I0912 21:47:36.820827   69940 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (13.164524688s)
	I0912 21:47:36.823949   69940 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:47:36.824346   69940 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:47:36.829299   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:36.830212   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:47:36.832986   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:47:36.833114   69940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:47:36.976762   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:47:36.976895   69940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:47:37.044983   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:47:37.045013   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:47:37.136168   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:47:37.239527   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:37.240456   69940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:47:37.240548   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:37.242189   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:37.431859   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:37.590777   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:37.625272   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.181037   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:38.185271   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:38.190263   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:38.261548   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.620592   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:38.630363   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.635885   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.105382   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.110725   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.112399   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.323447   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.530324   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.530623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.859110   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.860232   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.861290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.905482   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.918592786s)
	I0912 21:47:40.154576   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.018266504s)
	I0912 21:47:40.162944   69940 addons.go:475] Verifying addon gcp-auth=true in "addons-331995"
	I0912 21:47:40.167283   69940 out.go:177] * Verifying gcp-auth addon...
	I0912 21:47:40.174036   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:47:40.198322   69940 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:47:40.299719   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:40.342674   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:40.355912   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:40.566976   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:40.778846   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:40.833578   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:40.838380   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:41.296727   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:41.344005   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:41.357447   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:41.787497   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:41.837073   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:41.865267   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:42.297170   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:42.351194   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:42.366806   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:42.586658   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:42.787775   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:42.850126   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:42.852545   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:43.285358   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:43.337027   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:43.349090   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:43.780318   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:43.833935   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:43.844103   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:44.280231   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:44.333967   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:44.343825   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:44.780782   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:44.836077   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:44.848509   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:45.077031   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:45.283193   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:45.336801   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:45.346384   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:45.786878   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:45.854758   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:45.860692   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:46.293332   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:46.346544   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:46.359759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:46.787654   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:46.834326   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:46.839741   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:47.293360   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:47.335745   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:47.342239   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:47.570984   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:47.780263   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:47.836722   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:47.846490   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.111990   69940 pod_ready.go:93] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.112027   69940 pod_ready.go:82] duration metric: took 38.554754932s for pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.112045   69940 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.133582   69940 pod_ready.go:93] pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.133620   69940 pod_ready.go:82] duration metric: took 21.507818ms for pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.133639   69940 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.146783   69940 pod_ready.go:93] pod "etcd-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.146821   69940 pod_ready.go:82] duration metric: took 13.17035ms for pod "etcd-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.146865   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.161721   69940 pod_ready.go:93] pod "kube-apiserver-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.161794   69940 pod_ready.go:82] duration metric: took 14.911609ms for pod "kube-apiserver-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.161817   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.178200   69940 pod_ready.go:93] pod "kube-controller-manager-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.178257   69940 pod_ready.go:82] duration metric: took 16.427127ms for pod "kube-controller-manager-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.178277   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9slnj" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.339924   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:48.345955   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:48.355610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.467892   69940 pod_ready.go:93] pod "kube-proxy-9slnj" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.467926   69940 pod_ready.go:82] duration metric: took 289.63631ms for pod "kube-proxy-9slnj" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.467943   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.783963   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:48.835169   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:48.844946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.866724   69940 pod_ready.go:93] pod "kube-scheduler-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.866763   69940 pod_ready.go:82] duration metric: took 398.808683ms for pod "kube-scheduler-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.866777   69940 pod_ready.go:39] duration metric: took 41.200798834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
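The "Ready":"True" transitions above come down to a single condition on each pod's status: the pod_ready checks report a pod Ready exactly when its PodReady condition is True. A one-function sketch of that predicate (illustrative, self-contained):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the PodReady condition is True, i.e. the
    // "Ready":"True" state in the log lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }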
	I0912 21:47:48.866812   69940 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:47:48.866916   69940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:47:48.906923   69940 api_server.go:72] duration metric: took 46.304203247s to wait for apiserver process to appear ...
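The process probe above leans on pgrep's exit status: with -xnf, pgrep exits 0 only when a process whose full command line matches the pattern exists. A sketch of the same check, with the pattern copied from the log:

    package probe

    import "os/exec"

    // apiserverProcessUp is true when pgrep finds a matching process;
    // exec's Run returns nil exactly when the command exits 0.
    func apiserverProcessUp() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }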
	I0912 21:47:48.907127   69940 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:47:48.907205   69940 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:47:48.917509   69940 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:47:48.919581   69940 api_server.go:141] control plane version: v1.31.1
	I0912 21:47:48.919688   69940 api_server.go:131] duration metric: took 12.503799ms to wait for apiserver health ...
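The healthz round trip above (HTTP 200 with body "ok") can be reproduced with a plain GET. In the sketch below the URL comes from the log; skipping TLS verification is an assumption for illustration only, where a real client would trust the cluster CA instead.

    package probe

    import (
        "crypto/tls"
        "io"
        "net/http"
    )

    // apiserverHealthy treats HTTP 200 plus a body of "ok" as healthy,
    // matching the healthz check logged above.
    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    // e.g. apiserverHealthy("https://192.168.49.2:8443/healthz")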
	I0912 21:47:48.919727   69940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:47:49.084148   69940 system_pods.go:59] 19 kube-system pods found
	I0912 21:47:49.084219   69940 system_pods.go:61] "coredns-7c65d6cfc9-6p998" [18897be7-b902-4875-b941-ae33609d6ad3] Running
	I0912 21:47:49.084312   69940 system_pods.go:61] "coredns-7c65d6cfc9-vhwzq" [15cf2078-0cd4-4aee-af43-8e6982db1d9f] Running
	I0912 21:47:49.084409   69940 system_pods.go:61] "csi-hostpath-attacher-0" [876fb446-0031-409c-9b88-6ee9dbab79e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:47:49.084439   69940 system_pods.go:61] "csi-hostpath-resizer-0" [ca55e341-cd77-4896-92d1-1e316d2d7b95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:47:49.084502   69940 system_pods.go:61] "csi-hostpathplugin-ssw8n" [0fae5f54-937e-4802-b06f-184fcefc7ded] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:47:49.084535   69940 system_pods.go:61] "etcd-addons-331995" [ff9bbf64-85f3-4174-98da-3f3fff1de6e6] Running
	I0912 21:47:49.084549   69940 system_pods.go:61] "kube-apiserver-addons-331995" [441badbc-59a0-417e-8e96-21fce09febc8] Running
	I0912 21:47:49.084582   69940 system_pods.go:61] "kube-controller-manager-addons-331995" [32b0fae0-1656-4310-a0db-8ac8ebd06b24] Running
	I0912 21:47:49.084603   69940 system_pods.go:61] "kube-ingress-dns-minikube" [ff5b18d1-e63d-4b20-9301-4fe6a2d2b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:47:49.084628   69940 system_pods.go:61] "kube-proxy-9slnj" [fa2c6af2-4383-4d79-a6b6-8eee8fa882ef] Running
	I0912 21:47:49.084642   69940 system_pods.go:61] "kube-scheduler-addons-331995" [757291c4-4137-4c38-b3bf-c797be72627f] Running
	I0912 21:47:49.084656   69940 system_pods.go:61] "metrics-server-84c5f94fbc-qj8c7" [10575e3b-51e3-4a17-9911-8ed2245ed9c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:47:49.084667   69940 system_pods.go:61] "nvidia-device-plugin-daemonset-4sqcf" [2bb7e4c9-91fb-4914-ab76-7ffc5517e40d] Running
	I0912 21:47:49.084675   69940 system_pods.go:61] "registry-66c9cd494c-6jhvv" [07e442c7-a078-4cde-aa3c-fad57aac4c18] Running
	I0912 21:47:49.084710   69940 system_pods.go:61] "registry-proxy-rr7bm" [c0051f59-18dc-4684-b682-f4a992ea12a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:47:49.084729   69940 system_pods.go:61] "snapshot-controller-56fcc65765-mjfwh" [9163427d-b44f-4930-96ac-3890d3cf0f2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.084767   69940 system_pods.go:61] "snapshot-controller-56fcc65765-zbzzp" [b78b8a57-7f74-4cfa-a4e6-cd9904249298] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.084780   69940 system_pods.go:61] "storage-provisioner" [e82e018d-5b98-4b82-a817-a66b52cebf28] Running
	I0912 21:47:49.084792   69940 system_pods.go:61] "tiller-deploy-b48cc5f79-pxwc5" [c1f59ae3-6c11-42a9-b480-6fb541264acf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:47:49.084807   69940 system_pods.go:74] duration metric: took 165.048833ms to wait for pod list to return data ...
	I0912 21:47:49.084847   69940 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:47:49.265495   69940 default_sa.go:45] found service account: "default"
	I0912 21:47:49.265616   69940 default_sa.go:55] duration metric: took 180.740943ms for default service account to be created ...
	I0912 21:47:49.265697   69940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:47:49.281459   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:49.347172   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:49.347833   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:49.497282   69940 system_pods.go:86] 19 kube-system pods found
	I0912 21:47:49.497412   69940 system_pods.go:89] "coredns-7c65d6cfc9-6p998" [18897be7-b902-4875-b941-ae33609d6ad3] Running
	I0912 21:47:49.497450   69940 system_pods.go:89] "coredns-7c65d6cfc9-vhwzq" [15cf2078-0cd4-4aee-af43-8e6982db1d9f] Running
	I0912 21:47:49.497485   69940 system_pods.go:89] "csi-hostpath-attacher-0" [876fb446-0031-409c-9b88-6ee9dbab79e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:47:49.497529   69940 system_pods.go:89] "csi-hostpath-resizer-0" [ca55e341-cd77-4896-92d1-1e316d2d7b95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:47:49.497594   69940 system_pods.go:89] "csi-hostpathplugin-ssw8n" [0fae5f54-937e-4802-b06f-184fcefc7ded] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:47:49.497638   69940 system_pods.go:89] "etcd-addons-331995" [ff9bbf64-85f3-4174-98da-3f3fff1de6e6] Running
	I0912 21:47:49.497679   69940 system_pods.go:89] "kube-apiserver-addons-331995" [441badbc-59a0-417e-8e96-21fce09febc8] Running
	I0912 21:47:49.497710   69940 system_pods.go:89] "kube-controller-manager-addons-331995" [32b0fae0-1656-4310-a0db-8ac8ebd06b24] Running
	I0912 21:47:49.497754   69940 system_pods.go:89] "kube-ingress-dns-minikube" [ff5b18d1-e63d-4b20-9301-4fe6a2d2b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:47:49.497784   69940 system_pods.go:89] "kube-proxy-9slnj" [fa2c6af2-4383-4d79-a6b6-8eee8fa882ef] Running
	I0912 21:47:49.497813   69940 system_pods.go:89] "kube-scheduler-addons-331995" [757291c4-4137-4c38-b3bf-c797be72627f] Running
	I0912 21:47:49.497917   69940 system_pods.go:89] "metrics-server-84c5f94fbc-qj8c7" [10575e3b-51e3-4a17-9911-8ed2245ed9c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:47:49.497977   69940 system_pods.go:89] "nvidia-device-plugin-daemonset-4sqcf" [2bb7e4c9-91fb-4914-ab76-7ffc5517e40d] Running
	I0912 21:47:49.498008   69940 system_pods.go:89] "registry-66c9cd494c-6jhvv" [07e442c7-a078-4cde-aa3c-fad57aac4c18] Running
	I0912 21:47:49.498039   69940 system_pods.go:89] "registry-proxy-rr7bm" [c0051f59-18dc-4684-b682-f4a992ea12a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:47:49.498248   69940 system_pods.go:89] "snapshot-controller-56fcc65765-mjfwh" [9163427d-b44f-4930-96ac-3890d3cf0f2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.498317   69940 system_pods.go:89] "snapshot-controller-56fcc65765-zbzzp" [b78b8a57-7f74-4cfa-a4e6-cd9904249298] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.498357   69940 system_pods.go:89] "storage-provisioner" [e82e018d-5b98-4b82-a817-a66b52cebf28] Running
	I0912 21:47:49.498402   69940 system_pods.go:89] "tiller-deploy-b48cc5f79-pxwc5" [c1f59ae3-6c11-42a9-b480-6fb541264acf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:47:49.498435   69940 system_pods.go:126] duration metric: took 232.695943ms to wait for k8s-apps to be running ...
	I0912 21:47:49.498496   69940 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:47:49.498640   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:47:49.529897   69940 system_svc.go:56] duration metric: took 31.369692ms WaitForService to wait for kubelet
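The kubelet check above works like the earlier pgrep probe: systemctl is-active --quiet exits 0 only when the unit is active. The sketch below uses the plain unit name; the extra "service" token in the logged command comes from minikube's init-system wrapper and is omitted here.

    package probe

    import "os/exec"

    // kubeletActive maps the unit's state onto systemctl's exit status.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }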
	I0912 21:47:49.530021   69940 kubeadm.go:582] duration metric: took 46.927304422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:47:49.530118   69940 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:47:49.666511   69940 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0912 21:47:49.666619   69940 node_conditions.go:123] node cpu capacity is 2
	I0912 21:47:49.666660   69940 node_conditions.go:105] duration metric: took 136.516491ms to run NodePressure ...
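The NodePressure pass above also logs where its capacity figures come from: each node advertises cpu and ephemeral-storage in status.capacity. A minimal client-go sketch under the same assumptions as the earlier poll loop:

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node's CPU and ephemeral-storage
    // capacity, the two figures reported by the node_conditions lines.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }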
	I0912 21:47:49.666701   69940 start.go:241] waiting for startup goroutines ...
	I0912 21:47:49.780615   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:49.837598   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:49.848634   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:50.281807   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:50.341857   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:50.342789   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:50.788610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:50.842673   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:50.853365   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:51.291597   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:51.340761   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:51.348012   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:51.780288   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:51.838813   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:51.839797   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:52.313936   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:52.332715   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:52.338834   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:52.815908   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.285926   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:53.286190   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.289150   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.505094   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.510458   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:53.787014   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.833651   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.840327   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.282694   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:54.349518   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:54.383360   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.781892   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:54.842943   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.858943   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.293466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:55.395172   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:55.402006   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.803905   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:55.868250   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.870919   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:56.281081   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:56.378063   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:56.380009   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:56.796621   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:56.835939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:56.857625   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.284270   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:57.357764   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:57.369570   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.792424   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:57.859754   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.863344   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.311602   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:58.374134   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.387188   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:58.792978   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:58.843290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.847855   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:59.316158   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:59.361827   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:59.364224   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:59.783856   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:59.845956   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:59.846292   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:00.281275   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:00.335770   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:00.341044   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:00.797268   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:00.911147   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:00.913622   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:01.296983   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:01.345232   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:01.347647   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:01.781722   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:01.839524   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:01.846260   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:02.283565   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:02.343628   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:02.351046   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:02.779305   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:02.835451   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:02.840581   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:03.279078   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:03.331903   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:03.338348   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:03.907243   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:03.908111   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:03.909201   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:04.281528   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:04.368668   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:04.369818   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:04.782038   69940 kapi.go:107] duration metric: took 30.00871618s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:48:04.837527   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:04.854216   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:05.345185   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:05.358722   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:05.846452   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:05.849874   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:06.445916   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:06.448160   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:06.837661   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:06.844340   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:07.346911   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:07.353814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:07.862252   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:07.876006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:08.338248   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:08.343179   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:08.837461   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:08.842322   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:09.343939   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:09.344406   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:09.836939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:09.842267   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:10.336085   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:10.341187   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:10.834379   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:10.842322   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:11.343715   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:11.345690   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:11.880794   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:11.931834   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:12.341960   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:12.362249   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:12.840366   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:12.849230   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.333543   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:13.340524   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.965430   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.965585   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.354004   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.376525   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:14.833545   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.843541   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:15.366288   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:15.368944   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:15.841170   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:15.845248   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:16.342332   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:16.349164   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:16.834596   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:16.847026   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:17.345073   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:17.355102   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:17.835110   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:17.839071   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:18.334023   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:18.337277   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:18.832952   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:18.839601   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.363250   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:19.382456   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.864159   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.867717   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.349792   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.356498   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:20.878162   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.880972   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:21.340948   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:21.354279   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:21.873508   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:21.921313   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:22.383708   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:22.386982   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:22.881680   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:22.895588   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:23.469177   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:23.469661   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:23.832928   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:23.839105   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:24.349533   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:24.352326   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:24.837638   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:24.847357   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.334093   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:25.339092   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.869685   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.873003   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.373739   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.376780   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:26.838194   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.846160   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:27.340485   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:27.344513   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:27.833289   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:27.856523   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:28.358759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:28.359862   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:28.864939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:28.869108   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:29.339928   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:29.348006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:29.896299   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:29.908634   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:30.372334   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:30.377115   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:30.844342   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:30.876968   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:31.334505   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:31.342542   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:31.844204   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:31.847610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.358253   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:32.361038   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.915862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.917511   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.384528   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:33.419347   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.846687   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.891659   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:34.408627   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:34.410577   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:34.848290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:34.848949   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.337969   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:35.343374   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.853466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.853886   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.337954   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.347897   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:36.831720   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.838040   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.340281   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:37.347709   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.855496   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.860240   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.341181   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.349659   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:38.875524   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.902936   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:39.356499   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:39.459946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:39.871696   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:39.878903   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:40.409357   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:40.412029   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:40.863988   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:40.864466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.396587   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:41.400377   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.897185   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.915623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:42.335705   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:42.349070   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.308040   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.336963   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.516434   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.516495   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.888510   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.889323   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:44.346794   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:44.348904   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:44.890648   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:44.912026   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:45.337345   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:45.343562   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:45.834461   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:45.840709   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:46.340196   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:46.343223   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:46.837234   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:46.851857   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:47.338127   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:47.342891   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:47.985386   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:47.985859   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:48.332506   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:48.340282   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:48.863219   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:48.865849   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:49.348678   69940 kapi.go:107] duration metric: took 1m20.022596727s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:48:49.352744   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:49.847370   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:50.343745   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:50.844397   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:51.341527   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:51.885138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:52.370797   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:52.838083   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:53.344804   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:53.839805   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:54.365716   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:54.840204   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:55.345739   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:55.841982   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:56.346862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:56.846779   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:57.339284   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:57.840462   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:58.338320   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:58.840341   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:59.341618   69940 kapi.go:107] duration metric: took 1m22.51138833s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:49:02.681560   69940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:49:02.681596   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:03.180796   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:03.680304   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:04.187136   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:04.680145   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:05.183753   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:05.679814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:06.181708   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:06.680435   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:07.181294   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:07.681947   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:08.180875   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:08.682374   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:09.179711   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:09.679769   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:10.180767   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:10.679862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:11.180603   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:11.680792   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:12.180962   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:12.681368   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:13.180440   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:13.681810   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:14.180802   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:14.679794   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:15.179600   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:15.680814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:16.181215   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:16.681273   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:17.180156   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:17.680284   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:18.182639   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:18.680544   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:19.182444   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:19.680525   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:20.181735   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:20.680984   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:21.180490   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:21.680684   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:22.181806   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:22.681187   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:23.181579   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:23.680972   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:24.180945   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:24.680600   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:25.180256   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:25.681395   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:26.181785   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:26.680511   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:27.180628   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:27.681970   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:28.184760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:28.682773   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:29.180868   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:29.679939   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:30.180469   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:30.681287   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:31.181175   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:31.680027   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:32.180786   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:32.679772   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:33.181953   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:33.680175   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:34.182304   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:34.681356   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:35.180612   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:35.680760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:36.182405   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:36.680943   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:37.180667   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:37.681004   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:38.187513   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:38.682010   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:39.181521   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:39.681364   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:40.181138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:40.681040   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:41.179759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:41.679997   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:42.181448   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:42.681181   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:43.182220   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:43.681089   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:44.184252   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:44.689166   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:45.186799   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:45.685242   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:46.181045   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:46.680807   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:47.180210   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:47.680829   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:48.180771   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:48.680530   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:49.180222   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:49.680014   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:50.181584   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:50.683729   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:51.181163   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:51.712333   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:52.181025   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:52.680155   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:53.180492   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:53.680716   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:54.180607   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:54.680830   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:55.180760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:55.680589   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:56.180003   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:56.679918   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:57.181386   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:57.680913   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:58.187803   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:58.680782   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:59.182043   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:59.680570   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:00.181323   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:00.679811   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:01.179628   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:01.681395   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:02.190875   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:02.680916   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:03.180685   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:03.680704   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:04.186471   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:04.680946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:05.180582   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:05.679485   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:06.182166   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:06.688006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:07.181138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:07.681345   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:08.182874   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:08.683536   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:09.187135   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:09.717830   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:10.181111   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:10.681221   69940 kapi.go:107] duration metric: took 2m30.507191094s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:50:10.684713   69940 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-331995 cluster.
	I0912 21:50:10.687552   69940 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:50:10.690283   69940 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:50:10.693168   69940 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, default-storageclass, volcano, metrics-server, helm-tiller, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0912 21:50:10.696159   69940 addons.go:510] duration metric: took 3m8.09271824s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher storage-provisioner default-storageclass volcano metrics-server helm-tiller inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0912 21:50:10.696264   69940 start.go:246] waiting for cluster config update ...
	I0912 21:50:10.696364   69940 start.go:255] writing updated cluster config ...
	I0912 21:50:10.697014   69940 ssh_runner.go:195] Run: rm -f paused
	I0912 21:50:11.234534   69940 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:50:11.237872   69940 out.go:177] * Done! kubectl is now configured to use "addons-331995" cluster and "default" namespace by default
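
Note on the gcp-auth messages above: the log confirms only that the webhook skips pods carrying a label with the `gcp-auth-skip-secret` key, and that rerunning addons enable with --refresh re-mounts credentials for existing pods. Below is a minimal sketch of an opted-out pod; the pod name, container image, and the label value are illustrative assumptions (the log names only the label key):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                # hypothetical name, for illustration
      labels:
        gcp-auth-skip-secret: "true"    # assumed value; the log confirms only the key
    spec:
      containers:
      - name: app
        image: nginx                    # placeholder image

For pods created before the addon finished coming up, the message above suggests recreating them or rerunning the enable step, presumably of the form `minikube addons enable gcp-auth --refresh` (command shape assumed from the `--refresh` flag mentioned in the log).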
	
	
	==> Docker <==
	Sep 12 21:49:29 addons-331995 dockerd[1164]: time="2024-09-12T21:49:29.062276194Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:49:29 addons-331995 dockerd[1164]: time="2024-09-12T21:49:29.062314334Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:49:29 addons-331995 dockerd[1164]: time="2024-09-12T21:49:29.066642613Z" level=error msg="Error running exec 888bbd2d8de983e97d3d9bdad4f4a079fbb7f3c511ce60fe8f9721f11547ee80 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 12 21:49:29 addons-331995 dockerd[1164]: time="2024-09-12T21:49:29.108916759Z" level=info msg="ignoring event" container=12f765164cc5bb68284187945f635cd37e6907e38a20134698897f7d5097d46c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:49:44 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bcadff3d17186caa2d6e95f24e54c7bfcbbf81da3c28876bfbcd15191f24133/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 12 21:49:44 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:49:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6db1451d901707963a5b3310aee240654f47b83bac1db5148df9d707c261164e/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 12 21:49:44 addons-331995 dockerd[1164]: time="2024-09-12T21:49:44.669704000Z" level=info msg="ignoring event" container=c077177f2e0be98b5b28333585ace98441eaa8490cfa47b2066e2e8538491457 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:49:44 addons-331995 dockerd[1164]: time="2024-09-12T21:49:44.755154302Z" level=info msg="ignoring event" container=dbb34de247fbfa1714faef6913266a9bf159f4498b451b8902e79ff55c8038a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:49:46 addons-331995 dockerd[1164]: time="2024-09-12T21:49:46.907538752Z" level=info msg="ignoring event" container=6db1451d901707963a5b3310aee240654f47b83bac1db5148df9d707c261164e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:49:46 addons-331995 dockerd[1164]: time="2024-09-12T21:49:46.914032675Z" level=info msg="ignoring event" container=6bcadff3d17186caa2d6e95f24e54c7bfcbbf81da3c28876bfbcd15191f24133 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:50:06 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:50:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad8b4b78fc3c3f51548410b4c6f1803895af83faf143bf84ba44053d24b3c6c8/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 12 21:50:06 addons-331995 dockerd[1164]: time="2024-09-12T21:50:06.951589327Z" level=warning msg="reference for unknown type: " digest="sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Sep 12 21:50:09 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:50:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Sep 12 21:50:23 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:50:23Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 12 21:50:24 addons-331995 dockerd[1164]: time="2024-09-12T21:50:24.980454856Z" level=info msg="ignoring event" container=60abc06ed7138bc759081ecec8f589729e0890e48de2290f1d544de672cce469 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:50:31 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:50:31Z" level=error msg="error getting RW layer size for container ID '12f765164cc5bb68284187945f635cd37e6907e38a20134698897f7d5097d46c': Error response from daemon: No such container: 12f765164cc5bb68284187945f635cd37e6907e38a20134698897f7d5097d46c"
	Sep 12 21:50:31 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:50:31Z" level=error msg="Set backoffDuration to : 1m0s for container ID '12f765164cc5bb68284187945f635cd37e6907e38a20134698897f7d5097d46c'"
	Sep 12 21:51:56 addons-331995 cri-dockerd[1419]: time="2024-09-12T21:51:56Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.961783940Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.961831512Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.965526336Z" level=error msg="Error running exec 2ecf8ffd4d1df1924d84f6a78d9b6523d0259c5207ebd0e991016d01333592b0 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.986731034Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.986841341Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:51:57 addons-331995 dockerd[1164]: time="2024-09-12T21:51:57.993411727Z" level=error msg="Error running exec a46ad57d8ea9014401e9f08fefb1a9bd607ef0fad646a30a31741480fbf5a23a in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 21:51:58 addons-331995 dockerd[1164]: time="2024-09-12T21:51:58.041321288Z" level=info msg="ignoring event" container=5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	5814a023664fc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            About a minute ago   Exited              gadget                                   5                   f942917293ca1       gadget-qxkpk
	cd2d86123e208       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 3 minutes ago        Running             gcp-auth                                 0                   ad8b4b78fc3c3       gcp-auth-89d5ffd79-zc25n
	6e596f69b1c3e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago        Running             csi-snapshotter                          0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	f750ad8c5413c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago        Running             csi-provisioner                          0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	e457284d20950       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago        Running             liveness-probe                           0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	1883839c5f1b9       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago        Running             hostpath                                 0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	3e32a354279ae       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago        Running             node-driver-registrar                    0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	d7956ca6ad484       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             4 minutes ago        Running             controller                               0                   1baabe6295ccb       ingress-nginx-controller-bc57996ff-ghbdz
	b1a811d4b1298       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago        Running             csi-resizer                              0                   a944326c09099       csi-hostpath-resizer-0
	f69d1097d49de       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         4 minutes ago        Running             admission                                0                   cbf92391595a3       volcano-admission-77d7d48b68-86lvv
	4c0037e50b94b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago        Running             csi-external-health-monitor-controller   0                   cd5cf7c706c63       csi-hostpathplugin-ssw8n
	4ecad4584e7a2       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago        Running             csi-attacher                             0                   7269db6cfe123       csi-hostpath-attacher-0
	36ea06f7c3982       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               5 minutes ago        Running             volcano-scheduler                        0                   499b7c798ffe6       volcano-scheduler-576bc46687-x2rvr
	4c84e0633820e       ce263a8653f9c                                                                                                                                5 minutes ago        Exited              patch                                    1                   2ddb9c21b6388       ingress-nginx-admission-patch-vjgh7
	7e2523ea2f227       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   ad6f30706accb       snapshot-controller-56fcc65765-mjfwh
	2b07f863c83cf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   5 minutes ago        Exited              create                                   0                   b46d1a8ec1c88       ingress-nginx-admission-create-62fw7
	518419862f565       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   279564e701505       snapshot-controller-56fcc65765-zbzzp
	3eab7e4cc8a0b       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      5 minutes ago        Running             volcano-controllers                      0                   9d35c921c6266       volcano-controllers-56675bb4d5-2xcbp
	24994544e183d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        5 minutes ago        Running             yakd                                     0                   e086acc613350       yakd-dashboard-67d98fc6b-xqz4j
	93726d79c636f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       5 minutes ago        Running             local-path-provisioner                   0                   cddc341984351       local-path-provisioner-86d989889c-hn7cz
	1c867a7319b30       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  5 minutes ago        Running             tiller                                   0                   6a92cecac9cb8       tiller-deploy-b48cc5f79-pxwc5
	15a61fdf28b5a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              5 minutes ago        Running             registry-proxy                           0                   de6b8c54852a3       registry-proxy-rr7bm
	872989ed73e1e       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        5 minutes ago        Running             metrics-server                           0                   068137373fe7b       metrics-server-84c5f94fbc-qj8c7
	30fad740af987       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             5 minutes ago        Running             minikube-ingress-dns                     0                   238cdd11405a9       kube-ingress-dns-minikube
	2c66e67b210eb       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             5 minutes ago        Running             registry                                 0                   c0fe77c8ba99b       registry-66c9cd494c-6jhvv
	b71da04555c07       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               5 minutes ago        Running             cloud-spanner-emulator                   0                   450b1183c12b9       cloud-spanner-emulator-769b77f747-2cg5r
	046f859adeec7       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     6 minutes ago        Running             nvidia-device-plugin-ctr                 0                   9570cbdbb7023       nvidia-device-plugin-daemonset-4sqcf
	f30d0ccccc620       6e38f40d628db                                                                                                                                6 minutes ago        Running             storage-provisioner                      0                   ecc35d17bf51e       storage-provisioner
	868a9e54f79bc       c69fa2e9cbf5f                                                                                                                                6 minutes ago        Running             coredns                                  0                   ff0c0e5a01423       coredns-7c65d6cfc9-6p998
	445dc44b267e6       c69fa2e9cbf5f                                                                                                                                6 minutes ago        Running             coredns                                  0                   5e8404ae6fe81       coredns-7c65d6cfc9-vhwzq
	7af155869329b       60c005f310ff3                                                                                                                                6 minutes ago        Running             kube-proxy                               0                   d34ada0abb6ee       kube-proxy-9slnj
	b880a4debed4b       6bab7719df100                                                                                                                                6 minutes ago        Running             kube-apiserver                           0                   5abe04d9f008b       kube-apiserver-addons-331995
	f9f9ebe55863b       2e96e5913fc06                                                                                                                                6 minutes ago        Running             etcd                                     0                   1a319d26cdb5b       etcd-addons-331995
	29fcb0173534b       175ffd71cce3d                                                                                                                                6 minutes ago        Running             kube-controller-manager                  0                   c1ddcceb5e23b       kube-controller-manager-addons-331995
	86c19ec99efff       9aa1fad941575                                                                                                                                6 minutes ago        Running             kube-scheduler                           0                   6ff136ae32ba3       kube-scheduler-addons-331995
	
	
	==> controller_ingress [d7956ca6ad48] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0912 21:48:48.797338       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0912 21:48:48.797927       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0912 21:48:48.816437       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0912 21:48:49.543865       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0912 21:48:49.617987       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0912 21:48:49.655358       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0912 21:48:49.776800       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6abed3cf-112e-49b8-bc6e-393ac8803cf8", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0912 21:48:49.782660       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"16fc4b97-9f7e-4b9b-ad0c-aae8bfb964d7", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0912 21:48:49.782963       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"aa7fa93d-6d21-4683-998d-ae89c6b2aa34", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0912 21:48:50.907379       7 nginx.go:317] "Starting NGINX process"
	I0912 21:48:50.926313       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0912 21:48:50.939770       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0912 21:48:50.943487       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:48:50.970788       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0912 21:48:50.971155       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-ghbdz"
	I0912 21:48:51.049656       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-ghbdz" node="addons-331995"
	I0912 21:48:51.087220       7 controller.go:213] "Backend successfully reloaded"
	I0912 21:48:51.087501       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0912 21:48:51.087926       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-ghbdz", UID:"b658f2d0-8e52-4414-bf8c-81bcbd9a15bd", APIVersion:"v1", ResourceVersion:"1256", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [445dc44b267e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[569963566]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 21:47:09.877) (total time: 30028ms):
	Trace[569963566]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30018ms (21:47:39.895)
	Trace[569963566]: [30.02820962s] [30.02820962s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[961821547]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 21:47:09.881) (total time: 30024ms):
	Trace[961821547]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30011ms (21:47:39.892)
	Trace[961821547]: [30.02467921s] [30.02467921s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:36998 - 22539 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,rd,ra 206 0.011019646s
	[INFO] 10.244.0.8:36998 - 26636 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,rd,ra 206 0.020344759s
	[INFO] 10.244.0.8:34022 - 38422 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.003910944s
	[INFO] 10.244.0.8:34022 - 48687 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.004250639s
	[INFO] 10.244.0.8:47605 - 39720 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000762771s
	[INFO] 10.244.0.8:47605 - 56878 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000734464s
	[INFO] 10.244.0.8:44780 - 46343 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000385778s
	[INFO] 10.244.0.8:44780 - 38669 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001049043s
	[INFO] 10.244.0.8:40664 - 45404 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000126875s
	[INFO] 10.244.0.8:40664 - 357 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000418748s
	[INFO] 10.244.0.26:49133 - 39508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003399423s
	[INFO] 10.244.0.26:38609 - 27818 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000231503s
	[INFO] 10.244.0.26:47855 - 40545 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157429s
	[INFO] 10.244.0.26:59443 - 65496 "A IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.00591248s
	[INFO] 10.244.0.26:41307 - 61273 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.004982528s
	[INFO] 10.244.0.26:33650 - 48868 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004500354s
	
	
	==> coredns [868a9e54f79b] <==
	[INFO] 10.244.0.8:42231 - 34404 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194802s
	[INFO] 10.244.0.8:59617 - 56571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151976s
	[INFO] 10.244.0.8:59617 - 34544 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106499s
	[INFO] 10.244.0.8:48323 - 25306 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171133s
	[INFO] 10.244.0.8:48323 - 63966 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122441s
	[INFO] 10.244.0.8:42010 - 50345 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 177 0.004193851s
	[INFO] 10.244.0.8:42010 - 11431 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 177 0.006814802s
	[INFO] 10.244.0.8:54934 - 64830 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000245648s
	[INFO] 10.244.0.8:54934 - 11268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191038s
	[INFO] 10.244.0.8:34147 - 59879 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000265517s
	[INFO] 10.244.0.8:34147 - 4860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000308605s
	[INFO] 10.244.0.8:41581 - 27020 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.009223986s
	[INFO] 10.244.0.8:41581 - 25227 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.009824796s
	[INFO] 10.244.0.8:40590 - 10991 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000137334s
	[INFO] 10.244.0.8:40590 - 18666 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00016705s
	[INFO] 10.244.0.8:36099 - 12503 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221759s
	[INFO] 10.244.0.8:36099 - 13011 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000436864s
	[INFO] 10.244.0.26:42816 - 4561 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000404391s
	[INFO] 10.244.0.26:34101 - 17948 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201728s
	[INFO] 10.244.0.26:38541 - 13689 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00020108s
	[INFO] 10.244.0.26:54684 - 59487 "AAAA IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.005249268s
	[INFO] 10.244.0.26:60239 - 56905 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003733364s
	[INFO] 10.244.0.26:40520 - 60082 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005744176s
	[INFO] 10.244.0.26:36108 - 1839 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003018103s
	[INFO] 10.244.0.26:52133 - 17789 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003163565s
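
The NXDOMAIN/NOERROR pattern above is ordinary resolver search-path expansion, not a DNS failure: each lookup is retried with every suffix from the querying pod's resolv.conf search list (its own namespace domain, svc.cluster.local, cluster.local, then the host's GCE-internal domains) until the fully qualified name answers NOERROR. A minimal way to view the search list driving this, assuming the cluster from this run is still up (dns-probe is an arbitrary pod name chosen for the check):

	kubectl --context addons-331995 run dns-probe --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- cat /etc/resolv.conf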
	
	
	==> describe nodes <==
	Name:               addons-331995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-331995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-331995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_46_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-331995
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-331995"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:46:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-331995
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:50:33 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:50:33 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:50:33 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:50:33 +0000   Thu, 12 Sep 2024 21:46:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-331995
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	System Info:
	  Machine ID:                 af769b9e892649a9a66756768ebde624
	  System UUID:                23a3c713-b4af-4c64-a638-7904dc1f2582
	  Boot ID:                    8d817c15-e3fc-48f0-8b3e-6ea4899766ef
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-2cg5r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  gadget                      gadget-qxkpk                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  gcp-auth                    gcp-auth-89d5ffd79-zc25n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-ghbdz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m6s
	  kube-system                 coredns-7c65d6cfc9-6p998                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m30s
	  kube-system                 coredns-7c65d6cfc9-vhwzq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m30s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 csi-hostpathplugin-ssw8n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 etcd-addons-331995                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m36s
	  kube-system                 kube-apiserver-addons-331995                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-controller-manager-addons-331995       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-9slnj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-addons-331995                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 metrics-server-84c5f94fbc-qj8c7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m19s
	  kube-system                 nvidia-device-plugin-daemonset-4sqcf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 registry-66c9cd494c-6jhvv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 registry-proxy-rr7bm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 snapshot-controller-56fcc65765-mjfwh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 snapshot-controller-56fcc65765-zbzzp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 tiller-deploy-b48cc5f79-pxwc5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  local-path-storage          local-path-provisioner-86d989889c-hn7cz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  volcano-system              volcano-admission-77d7d48b68-86lvv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  volcano-system              volcano-controllers-56675bb4d5-2xcbp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  volcano-system              volcano-scheduler-576bc46687-x2rvr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xqz4j              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (8%)   596Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m36s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s  kubelet          Node addons-331995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s  kubelet          Node addons-331995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s  kubelet          Node addons-331995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s  node-controller  Node addons-331995 event: Registered Node addons-331995 in Controller
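
The Allocated resources figures above check out against Allocatable: 1050m of CPU requested out of 2 CPUs (2000m) is the reported 52%, leaving roughly 950m of schedulable headroom, and 658Mi of the 8141780Ki memory is the reported 8%. A quick way to re-read that headroom on a live cluster, assuming this minikube profile still exists:

	kubectl --context addons-331995 describe node addons-331995 | grep -A 7 "Allocated resources"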
	
	
	==> dmesg <==
	[  +0.200616] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 22 1a a9 26 ba 08 06
	[  +3.148428] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 4a e9 56 e6 7d b7 08 06
	[  +7.499062] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 77 14 78 1e 58 08 06
	[  +7.560044] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 39 6b 69 f9 1a 08 06
	[  +4.103333] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 86 c0 55 13 a6 d7 08 06
	[  +0.158935] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 36 9d 3e c3 22 08 06
	[  +0.112368] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 55 e2 1d 59 35 08 06
	[Sep12 21:49] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 88 11 36 c1 00 08 06
	[  +0.074484] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee f8 ca 58 92 32 08 06
	[Sep12 21:50] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 23 3b c0 b6 ea 08 06
	[  +0.001215] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa 0c 72 97 a0 30 08 06
	[  +0.000731] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be b1 f1 17 08 40 08 06
	[Sep12 21:53] hrtimer: interrupt took 1181010 ns
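
The repeated "martian source" lines are the kernel's reverse-path logging for bridged pod traffic (10.244.0.x addresses seen on eth0 inside the minikube node container); with the Docker driver this is commonly benign noise rather than a routing problem. Whether the logging sysctl is even enabled can be checked from the host, assuming the addons-331995 container is still running and sysctl is present in the image:

	docker exec addons-331995 sysctl net.ipv4.conf.all.log_martians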
	
	
	==> etcd [f9f9ebe55863] <==
	{"level":"warn","ts":"2024-09-12T21:48:43.302471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T21:48:42.869603Z","time spent":"432.850326ms","remote":"127.0.0.1:44664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-12T21:48:43.299094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.505348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-12T21:48:43.317331Z","caller":"traceutil/trace.go:171","msg":"trace[1045908749] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1228; }","duration":"172.742835ms","start":"2024-09-12T21:48:43.144567Z","end":"2024-09-12T21:48:43.317310Z","steps":["trace[1045908749] 'agreement among raft nodes before linearized reading'  (duration: 154.451308ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:43.479125Z","caller":"traceutil/trace.go:171","msg":"trace[256221663] linearizableReadLoop","detail":"{readStateIndex:1264; appliedIndex:1263; }","duration":"140.658075ms","start":"2024-09-12T21:48:43.338443Z","end":"2024-09-12T21:48:43.479101Z","steps":["trace[256221663] 'read index received'  (duration: 140.385827ms)","trace[256221663] 'applied index is now lower than readState.Index'  (duration: 270.494µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:48:43.479363Z","caller":"traceutil/trace.go:171","msg":"trace[1503960030] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"159.638389ms","start":"2024-09-12T21:48:43.319712Z","end":"2024-09-12T21:48:43.479350Z","steps":["trace[1503960030] 'process raft request'  (duration: 159.188812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:43.479831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.365837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-769b77f747-2cg5r\" ","response":"range_response_count:1 size:3423"}
	{"level":"info","ts":"2024-09-12T21:48:43.479917Z","caller":"traceutil/trace.go:171","msg":"trace[346855960] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-769b77f747-2cg5r; range_end:; response_count:1; response_revision:1229; }","duration":"141.464472ms","start":"2024-09-12T21:48:43.338436Z","end":"2024-09-12T21:48:43.479900Z","steps":["trace[346855960] 'agreement among raft nodes before linearized reading'  (duration: 141.142206ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:43.490315Z","caller":"traceutil/trace.go:171","msg":"trace[1212267301] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"149.325897ms","start":"2024-09-12T21:48:43.340967Z","end":"2024-09-12T21:48:43.490293Z","steps":["trace[1212267301] 'process raft request'  (duration: 148.530466ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:43.491557Z","caller":"traceutil/trace.go:171","msg":"trace[611982315] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"138.954046ms","start":"2024-09-12T21:48:43.352585Z","end":"2024-09-12T21:48:43.491539Z","steps":["trace[611982315] 'process raft request'  (duration: 137.643923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:43.492019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.424574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:43.497238Z","caller":"traceutil/trace.go:171","msg":"trace[823081930] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1231; }","duration":"144.755162ms","start":"2024-09-12T21:48:43.352445Z","end":"2024-09-12T21:48:43.497201Z","steps":["trace[823081930] 'agreement among raft nodes before linearized reading'  (duration: 139.389684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:43.492816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.312789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:43.497686Z","caller":"traceutil/trace.go:171","msg":"trace[1654477385] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1231; }","duration":"145.191361ms","start":"2024-09-12T21:48:43.352480Z","end":"2024-09-12T21:48:43.497672Z","steps":["trace[1654477385] 'agreement among raft nodes before linearized reading'  (duration: 140.262687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:43.492918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.904378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f49d6fa8e6529d\" ","response":"range_response_count:1 size:913"}
	{"level":"info","ts":"2024-09-12T21:48:43.498107Z","caller":"traceutil/trace.go:171","msg":"trace[1941988053] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch.17f49d6fa8e6529d; range_end:; response_count:1; response_revision:1231; }","duration":"137.088353ms","start":"2024-09-12T21:48:43.361003Z","end":"2024-09-12T21:48:43.498091Z","steps":["trace[1941988053] 'agreement among raft nodes before linearized reading'  (duration: 131.839143ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:44.828372Z","caller":"traceutil/trace.go:171","msg":"trace[1255160325] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"130.254122ms","start":"2024-09-12T21:48:44.698094Z","end":"2024-09-12T21:48:44.828348Z","steps":["trace[1255160325] 'process raft request'  (duration: 130.10318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:47.973672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.626207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:47.973782Z","caller":"traceutil/trace.go:171","msg":"trace[1103384014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"144.712603ms","start":"2024-09-12T21:48:47.829016Z","end":"2024-09-12T21:48:47.973729Z","steps":["trace[1103384014] 'range keys from in-memory index tree'  (duration: 144.526239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:47.973962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.814097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:47.973990Z","caller":"traceutil/trace.go:171","msg":"trace[1223651583] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"140.844888ms","start":"2024-09-12T21:48:47.833137Z","end":"2024-09-12T21:48:47.973981Z","steps":["trace[1223651583] 'range keys from in-memory index tree'  (duration: 140.742495ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:48.308663Z","caller":"traceutil/trace.go:171","msg":"trace[860059594] linearizableReadLoop","detail":"{readStateIndex:1286; appliedIndex:1285; }","duration":"129.00134ms","start":"2024-09-12T21:48:48.179636Z","end":"2024-09-12T21:48:48.308637Z","steps":["trace[860059594] 'read index received'  (duration: 128.70953ms)","trace[860059594] 'applied index is now lower than readState.Index'  (duration: 290.355µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:48:48.310468Z","caller":"traceutil/trace.go:171","msg":"trace[473668203] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"318.962413ms","start":"2024-09-12T21:48:47.991450Z","end":"2024-09-12T21:48:48.310413Z","steps":["trace[473668203] 'process raft request'  (duration: 316.95513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:48.310664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.968423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:48.311892Z","caller":"traceutil/trace.go:171","msg":"trace[46603671] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"132.243511ms","start":"2024-09-12T21:48:48.179629Z","end":"2024-09-12T21:48:48.311872Z","steps":["trace[46603671] 'agreement among raft nodes before linearized reading'  (duration: 130.934072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:48.313904Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T21:48:47.991426Z","time spent":"320.358775ms","remote":"127.0.0.1:44968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3358,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" mod_revision:882 > success:<request_put:<key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" value_size:3267 >> failure:<request_range:<key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" > >"}
	
	
	==> gcp-auth [cd2d86123e20] <==
	2024/09/12 21:50:09 GCP Auth Webhook started!
	2024/09/12 21:50:29 Ready to marshal response ...
	2024/09/12 21:50:29 Ready to write response ...
	2024/09/12 21:50:30 Ready to marshal response ...
	2024/09/12 21:50:30 Ready to write response ...
	
	
	==> kernel <==
	 21:53:34 up 51 min,  0 users,  load average: 1.20, 1.98, 1.91
	Linux addons-331995 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [b880a4debed4] <==
	W0912 21:48:37.019178       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:38.055686       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:39.095312       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:40.182216       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:41.245676       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:42.362589       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:43.348216       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:48:43.348539       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:48:43.348967       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:48:43.349213       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:48:43.353325       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:43.353819       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:43.416882       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:44.505308       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:45.583931       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:46.696285       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:47.753383       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:49:02.281916       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:02.282013       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:49:43.366734       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:43.367200       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:49:43.367042       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:43.367263       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	I0912 21:50:29.757823       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0912 21:50:29.795075       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
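
Note the two failure modes above: mutatequeue.volcano.sh and mutatepod.volcano.sh fail closed, so every matching write is rejected while the volcano-admission Service endpoint refuses connections, whereas gcp-auth-mutate.k8s.io fails open and those requests proceed without mutation. The configured policies can be listed directly, assuming the cluster is still reachable:

	kubectl --context addons-331995 get mutatingwebhookconfigurations \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'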
	
	
	==> kube-controller-manager [29fcb0173534] <==
	I0912 21:49:43.396968       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:43.407079       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:43.414458       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:43.424389       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:43.431797       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:43.448111       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:43.471078       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:45.707619       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:45.725771       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:47.025534       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:47.043127       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:48.031865       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:48.044511       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:48.053487       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:48.056406       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 21:49:48.069020       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:49:48.078808       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 21:50:10.560336       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="22.770317ms"
	I0912 21:50:10.560472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="87.383µs"
	I0912 21:50:18.024597       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0912 21:50:18.028070       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0912 21:50:18.098109       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0912 21:50:18.109157       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0912 21:50:29.350765       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0912 21:50:33.538382       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-331995"
	
	
	==> kube-proxy [7af155869329] <==
	I0912 21:47:13.680709       1 server_linux.go:66] "Using iptables proxy"
	I0912 21:47:15.421561       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0912 21:47:15.425219       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:47:15.916170       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:47:15.916845       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:47:15.929120       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:47:15.934176       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:47:15.934221       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:47:16.042446       1 config.go:199] "Starting service config controller"
	I0912 21:47:16.044225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:47:16.048726       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:47:16.049131       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:47:16.071330       1 config.go:328] "Starting node config controller"
	I0912 21:47:16.075347       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:47:16.237394       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:47:16.237740       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:47:16.277926       1 shared_informer.go:320] Caches are synced for node config
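
kube-proxy came up cleanly in iptables mode; the only complaint is the hint that nodePortAddresses is unset, meaning NodePort connections are accepted on all local IPs. In a kubeadm-provisioned cluster such as this one, the effective settings can be inspected from the kube-proxy ConfigMap, assuming the cluster is still up:

	kubectl --context addons-331995 -n kube-system get configmap kube-proxy -o yaml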
	
	
	==> kube-scheduler [86c19ec99eff] <==
	W0912 21:46:55.413078       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:46:55.414590       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:46:56.256437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.256500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.327532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:46:56.327695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.374523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:46:56.374892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.436424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:46:56.436567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.495087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:46:56.497161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.546386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.546767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.659601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:46:56.661326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.661228       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:46:56.662207       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:46:56.674254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.674912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.713837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:46:56.714149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.738998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:46:56.739498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:46:59.566151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
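
The burst of "forbidden" list/watch errors is a familiar bootstrap race: the scheduler's informers start listing before its RBAC authorization is fully in place, and the errors stop once "Caches are synced" is logged (last line above). Whether a given permission has since been granted can be probed with impersonation, assuming the cluster is still reachable:

	kubectl --context addons-331995 auth can-i list pods --as=system:kube-scheduler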
	
	
	==> kubelet <==
	Sep 12 21:51:28 addons-331995 kubelet[2185]: E0912 21:51:28.096817    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:51:42 addons-331995 kubelet[2185]: I0912 21:51:42.095160    2185 scope.go:117] "RemoveContainer" containerID="60abc06ed7138bc759081ecec8f589729e0890e48de2290f1d544de672cce469"
	Sep 12 21:51:42 addons-331995 kubelet[2185]: E0912 21:51:42.095369    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:51:56 addons-331995 kubelet[2185]: I0912 21:51:56.094850    2185 scope.go:117] "RemoveContainer" containerID="60abc06ed7138bc759081ecec8f589729e0890e48de2290f1d544de672cce469"
	Sep 12 21:51:58 addons-331995 kubelet[2185]: I0912 21:51:58.438402    2185 scope.go:117] "RemoveContainer" containerID="60abc06ed7138bc759081ecec8f589729e0890e48de2290f1d544de672cce469"
	Sep 12 21:51:58 addons-331995 kubelet[2185]: I0912 21:51:58.771676    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:51:58 addons-331995 kubelet[2185]: E0912 21:51:58.772350    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:51:59 addons-331995 kubelet[2185]: I0912 21:51:59.095110    2185 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rr7bm" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 21:51:59 addons-331995 kubelet[2185]: I0912 21:51:59.848800    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:51:59 addons-331995 kubelet[2185]: E0912 21:51:59.849129    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:52:12 addons-331995 kubelet[2185]: I0912 21:52:12.095370    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:52:12 addons-331995 kubelet[2185]: E0912 21:52:12.095633    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:52:23 addons-331995 kubelet[2185]: I0912 21:52:23.095201    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:52:23 addons-331995 kubelet[2185]: E0912 21:52:23.095422    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:52:35 addons-331995 kubelet[2185]: I0912 21:52:35.094268    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:52:35 addons-331995 kubelet[2185]: E0912 21:52:35.094593    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:52:47 addons-331995 kubelet[2185]: I0912 21:52:47.094575    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:52:47 addons-331995 kubelet[2185]: E0912 21:52:47.094904    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:52:58 addons-331995 kubelet[2185]: I0912 21:52:58.096238    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:52:58 addons-331995 kubelet[2185]: E0912 21:52:58.096586    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:53:09 addons-331995 kubelet[2185]: I0912 21:53:09.094199    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:53:09 addons-331995 kubelet[2185]: E0912 21:53:09.094557    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
	Sep 12 21:53:20 addons-331995 kubelet[2185]: I0912 21:53:20.094726    2185 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rr7bm" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 21:53:23 addons-331995 kubelet[2185]: I0912 21:53:23.094698    2185 scope.go:117] "RemoveContainer" containerID="5814a023664fc24ce6516b6abfa209be4f2df66875a6d3631de3081fdd854363"
	Sep 12 21:53:23 addons-331995 kubelet[2185]: E0912 21:53:23.095166    2185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-qxkpk_gadget(f66e77ac-0168-4c48-9d46-ce276ef98484)\"" pod="gadget/gadget-qxkpk" podUID="f66e77ac-0168-4c48-9d46-ce276ef98484"
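
The gadget container is in CrashLoopBackOff, and the back-off quoted in the messages doubles as expected (1m20s, then 2m40s) toward kubelet's default 5m cap; kubelet keeps removing the dead container but declines to restart it until the back-off expires. The container's exit reason would be the next thing to pull, assuming the pod still exists:

	kubectl --context addons-331995 -n gadget describe pod gadget-qxkpk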
	
	
	==> storage-provisioner [f30d0ccccc62] <==
	I0912 21:47:21.810825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:47:22.274647       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:47:22.275065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:47:22.690583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:47:22.691995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-331995_452362f9-16f2-495f-ae00-4487175040a7!
	I0912 21:47:22.763502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ded5eb6c-7290-49d3-bbc7-91676b62b5b7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-331995_452362f9-16f2-495f-ae00-4487175040a7 became leader
	I0912 21:47:23.306677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-331995_452362f9-16f2-495f-ae00-4487175040a7!
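
The provisioner elects a leader through the kube-system/k8s.io-minikube-hostpath Endpoints object shown above; client-go records the current holder in that object's control-plane.alpha.kubernetes.io/leader annotation, which can be read back, assuming the cluster is still up:

	kubectl --context addons-331995 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'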
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-331995 -n addons-331995
helpers_test.go:261: (dbg) Run:  kubectl --context addons-331995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-331995 describe pod ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-331995 describe pod ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0: exit status 1 (176.949867ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-62fw7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vjgh7" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-331995 describe pod ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (204.52s)

TestAddons/parallel/Registry (77.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.052194ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6jhvv" [07e442c7-a078-4cde-aa3c-fad57aac4c18] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014230581s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rr7bm" [c0051f59-18dc-4684-b682-f4a992ea12a2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006309339s
addons_test.go:342: (dbg) Run:  kubectl --context addons-331995 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-331995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-331995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.150292774s)
-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-331995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 ip
2024/09/12 22:02:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable registry --alsologtostderr -v=1: (1.027702609s)
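For the Registry failure above, the in-cluster wget timed out even though the harness still logged a host-side GET against 192.168.49.2:5000. A sketch for re-running both probes by hand — assuming the cluster is still up, and reusing the busybox image and service name from the test itself:

    # in-cluster: resolve the service and probe it, mirroring the failed step
    kubectl --context addons-331995 run registry-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host side: hit the registry's HTTP API via the node IP that minikube reports
    curl -v "http://$(out/minikube-linux-amd64 -p addons-331995 ip):5000/v2/"

A timeout on the first probe but not the second typically points at in-cluster DNS or the service-proxy path rather than the registry itself.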
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-331995
helpers_test.go:235: (dbg) docker inspect addons-331995:
-- stdout --
	[
	    {
	        "Id": "19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e",
	        "Created": "2024-09-12T21:46:36.531837396Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 70419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T21:46:36.712670886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1e046fff9d873d0625e7bcc757c3514a16d475711d13430b9690fa498decc3a8",
	        "ResolvConfPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/hosts",
	        "LogPath": "/var/lib/docker/containers/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e/19a13011e6679d2c63d2a96db045a63005478ea2bb59e4bb58ee3bc2b2c1ce1e-json.log",
	        "Name": "/addons-331995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-331995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-331995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51-init/diff:/var/lib/docker/overlay2/ffdf788bdf1d1cdb120030b71e5081c18b78a7cda19c1d5699c3f05321eeb2ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61dd1faf9b906dd84b392c130a39fe08e6205e8c85a9a511120f47e26a6f4c51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-331995",
	                "Source": "/var/lib/docker/volumes/addons-331995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-331995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-331995",
	                "name.minikube.sigs.k8s.io": "addons-331995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b4a752d18a6cbc55187cb58233f7a5aeaa2acef8b04f85ba70a4cd819fd59ae",
	            "SandboxKey": "/var/run/docker/netns/0b4a752d18a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-331995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f1bea63ba7151dba9450c5ddc2a6e5ac868361c31512e383ce228a7ec5e8dc78",
	                    "EndpointID": "96789eb05de757d0cc4124481d4cc051b9af8faa180457cfbd87b3e8bbbc5cab",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-331995",
	                        "19a13011e667"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-331995 -n addons-331995
helpers_test.go:239: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p addons-331995 -n addons-331995: (1.136368014s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 logs -n 25: (2.484891047s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                  | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC |                     |
	|         | addons-331995                        |               |                       |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC |                     |
	|         | addons-331995                        |               |                       |         |                     |                     |
	| start   | -p addons-331995 --wait=true         | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 21:45 UTC | 12 Sep 24 21:50 UTC |
	|         | --memory=4000 --alsologtostderr      |               |                       |         |                     |                     |
	|         | --addons=registry                    |               |                       |         |                     |                     |
	|         | --addons=metrics-server              |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots             |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                    |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner               |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget            |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |               |                       |         |                     |                     |
	|         | --driver=docker                      |               |                       |         |                     |                     |
	|         | --container-runtime=docker           |               |                       |         |                     |                     |
	|         | --addons=ingress                     |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                 |               |                       |         |                     |                     |
	|         | --addons=helm-tiller                 |               |                       |         |                     |                     |
	| addons  | addons-331995 addons                 | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC | 12 Sep 24 22:02 UTC |
	|         | disable csi-hostpath-driver          |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-331995 addons                 | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC | 12 Sep 24 22:02 UTC |
	|         | disable volumesnapshots              |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-331995 addons disable         | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC | 12 Sep 24 22:02 UTC |
	|         | helm-tiller --alsologtostderr        |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| ip      | addons-331995 ip                     | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC | 12 Sep 24 22:02 UTC |
	| addons  | addons-331995 addons disable         | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC | 12 Sep 24 22:02 UTC |
	|         | registry --alsologtostderr           |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| addons  | addons-331995 addons                 | addons-331995 | g528047478195_compute | v1.34.0 | 12 Sep 24 22:02 UTC |                     |
	|         | disable metrics-server               |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:45:49
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:45:49.078751   69940 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:45:49.078936   69940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:45:49.078947   69940 out.go:358] Setting ErrFile to fd 2...
	I0912 21:45:49.078957   69940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:45:49.079277   69940 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
	W0912 21:45:49.079618   69940 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/config/config.json: no such file or directory
	I0912 21:45:49.080289   69940 out.go:352] Setting JSON to false
	I0912 21:45:49.081284   69940 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":2650,"bootTime":1726174899,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0912 21:45:49.081362   69940 start.go:139] virtualization:  guest
	I0912 21:45:49.085982   69940 out.go:177] * [addons-331995] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	W0912 21:45:49.089793   69940 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:45:49.089851   69940 notify.go:220] Checking for updates...
	I0912 21:45:49.089936   69940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:45:49.093369   69940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:45:49.096802   69940 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 21:45:49.100414   69940 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	I0912 21:45:49.104961   69940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:45:49.108291   69940 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0912 21:45:49.112078   69940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:45:49.157984   69940 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0912 21:45:49.158296   69940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:45:49.266446   69940 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-12 21:45:49.247377546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:45:49.266659   69940 docker.go:318] overlay module found
	I0912 21:45:49.270367   69940 out.go:177] * Using the docker driver based on user configuration
	I0912 21:45:49.273772   69940 start.go:297] selected driver: docker
	I0912 21:45:49.273834   69940 start.go:901] validating driver "docker" against <nil>
	I0912 21:45:49.273861   69940 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:45:49.274759   69940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:45:49.376710   69940 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-12 21:45:49.359923636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:45:49.376966   69940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:45:49.377403   69940 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0912 21:45:49.377429   69940 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0912 21:45:49.377487   69940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:45:49.381228   69940 out.go:177] * Using Docker driver with root privileges
	I0912 21:45:49.385116   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:45:49.385169   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:45:49.385204   69940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:45:49.385345   69940 start.go:340] cluster config:
	{Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:45:49.388847   69940 out.go:177] * Starting "addons-331995" primary control-plane node in "addons-331995" cluster
	I0912 21:45:49.391841   69940 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:45:49.395280   69940 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:45:49.398316   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:49.398465   69940 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:45:49.425373   69940 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:45:49.425838   69940 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:45:49.425995   69940 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:45:49.428953   69940 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:45:49.428985   69940 cache.go:56] Caching tarball of preloaded images
	I0912 21:45:49.429477   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:49.434439   69940 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
	I0912 21:45:49.438083   69940 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:49.471182   69940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0912 21:45:52.666899   69940 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:52.667173   69940 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0912 21:45:54.062020   69940 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 21:45:54.062551   69940 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json ...
	I0912 21:45:54.062607   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json: {Name:mkb3372d6a177aebf5f7ec207cfe88817f7c5bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:57.961944   69940 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 21:45:57.961968   69940 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 21:46:23.658165   69940 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 21:46:23.658218   69940 cache.go:194] Successfully downloaded all kic artifacts
	I0912 21:46:23.658296   69940 start.go:360] acquireMachinesLock for addons-331995: {Name:mk84494dea4fc95748971c805f99ee1b550f8b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:46:23.658642   69940 start.go:364] duration metric: took 311.509µs to acquireMachinesLock for "addons-331995"
	I0912 21:46:23.658702   69940 start.go:93] Provisioning new machine with config: &{Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:46:23.658863   69940 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:46:23.663377   69940 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 21:46:23.663825   69940 start.go:159] libmachine.API.Create for "addons-331995" (driver="docker")
	I0912 21:46:23.663871   69940 client.go:168] LocalClient.Create starting
	I0912 21:46:23.664031   69940 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem
	I0912 21:46:23.788484   69940 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem
	I0912 21:46:24.213669   69940 cli_runner.go:164] Run: docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:46:24.239933   69940 cli_runner.go:211] docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:46:24.240140   69940 network_create.go:284] running [docker network inspect addons-331995] to gather additional debugging logs...
	I0912 21:46:24.240277   69940 cli_runner.go:164] Run: docker network inspect addons-331995
	W0912 21:46:24.264508   69940 cli_runner.go:211] docker network inspect addons-331995 returned with exit code 1
	I0912 21:46:24.264627   69940 network_create.go:287] error running [docker network inspect addons-331995]: docker network inspect addons-331995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-331995 not found
	I0912 21:46:24.264656   69940 network_create.go:289] output of [docker network inspect addons-331995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-331995 not found
	
	** /stderr **
	I0912 21:46:24.264830   69940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:46:24.292427   69940 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc016ce2980}
	I0912 21:46:24.292498   69940 network_create.go:124] attempt to create docker network addons-331995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0912 21:46:24.292619   69940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-331995 addons-331995
	I0912 21:46:24.398856   69940 network_create.go:108] docker network addons-331995 192.168.49.0/24 created
	I0912 21:46:24.398905   69940 kic.go:121] calculated static IP "192.168.49.2" for the "addons-331995" container
	I0912 21:46:24.399091   69940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:46:24.424907   69940 cli_runner.go:164] Run: docker volume create addons-331995 --label name.minikube.sigs.k8s.io=addons-331995 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:46:24.454817   69940 oci.go:103] Successfully created a docker volume addons-331995
	I0912 21:46:24.454997   69940 cli_runner.go:164] Run: docker run --rm --name addons-331995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --entrypoint /usr/bin/test -v addons-331995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
	I0912 21:46:28.539745   69940 cli_runner.go:217] Completed: docker run --rm --name addons-331995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --entrypoint /usr/bin/test -v addons-331995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (4.0846871s)
	I0912 21:46:28.539790   69940 oci.go:107] Successfully prepared a docker volume addons-331995
	I0912 21:46:28.539820   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:46:28.539853   69940 kic.go:194] Starting extracting preloaded images to volume ...
	I0912 21:46:28.539991   69940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-331995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:46:36.409576   69940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-331995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (7.869507519s)
	I0912 21:46:36.409624   69940 kic.go:203] duration metric: took 7.869767004s to extract preloaded images to volume ...
	W0912 21:46:36.409752   69940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0912 21:46:36.409820   69940 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0912 21:46:36.409914   69940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:46:36.504599   69940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-331995 --name addons-331995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-331995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-331995 --network addons-331995 --ip 192.168.49.2 --volume addons-331995:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
	I0912 21:46:36.938910   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Running}}
	I0912 21:46:36.985522   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:37.032879   69940 cli_runner.go:164] Run: docker exec addons-331995 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:46:37.150453   69940 oci.go:144] the created container "addons-331995" has a running status.
	I0912 21:46:37.150493   69940 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa...
	I0912 21:46:37.682434   69940 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:46:37.774300   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:37.846181   69940 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:46:37.846234   69940 kic_runner.go:114] Args: [docker exec --privileged addons-331995 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:46:38.047377   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:46:38.100752   69940 machine.go:93] provisionDockerMachine start ...
	I0912 21:46:38.100937   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.149661   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.150009   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.150027   69940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 21:46:38.355282   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-331995
	
	I0912 21:46:38.355313   69940 ubuntu.go:169] provisioning hostname "addons-331995"
	I0912 21:46:38.355434   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.393288   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.393672   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.393703   69940 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-331995 && echo "addons-331995" | sudo tee /etc/hostname
	I0912 21:46:38.590768   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-331995
	
	I0912 21:46:38.590910   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.633281   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:38.633628   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:38.633661   69940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-331995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-331995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-331995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:46:38.786104   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:46:38.786140   69940 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube}
	I0912 21:46:38.786174   69940 ubuntu.go:177] setting up certificates
	I0912 21:46:38.786192   69940 provision.go:84] configureAuth start
	I0912 21:46:38.786332   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:38.824555   69940 provision.go:143] copyHostCerts
	I0912 21:46:38.824693   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.pem (1119 bytes)
	I0912 21:46:38.824922   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/cert.pem (1164 bytes)
	I0912 21:46:38.825127   69940 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/key.pem (1679 bytes)
	I0912 21:46:38.825261   69940 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-331995 san=[127.0.0.1 192.168.49.2 addons-331995 localhost minikube]
	I0912 21:46:38.900500   69940 provision.go:177] copyRemoteCerts
	I0912 21:46:38.900622   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:46:38.900707   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:38.927613   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:39.029653   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:46:39.071478   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
	I0912 21:46:39.117968   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0912 21:46:39.157603   69940 provision.go:87] duration metric: took 371.391849ms to configureAuth
	I0912 21:46:39.157709   69940 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:46:39.158123   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:46:39.158263   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.188586   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.188900   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.188926   69940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 21:46:39.327914   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0912 21:46:39.328041   69940 ubuntu.go:71] root file system type: overlay
	I0912 21:46:39.328287   69940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 21:46:39.328481   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.357575   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.357899   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.358027   69940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 21:46:39.517145   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 21:46:39.517307   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:39.547456   69940 main.go:141] libmachine: Using SSH client type: native
	I0912 21:46:39.547854   69940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8375c0] 0x83a320 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0912 21:46:39.547892   69940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 21:46:40.733033   69940 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-12 21:46:39.513858772 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
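	Two idioms in the unit update above are worth calling out. A Type=notify service may declare only one ExecStart=, so the empty `ExecStart=` line clears whatever command the packaged unit defined before the real one is set; and the update is idempotent because `diff -u` exits non-zero only when the files differ, which is what triggers the `|| { mv; daemon-reload; restart; }` branch. A minimal standalone sketch of the same pattern (file names and dockerd flags here are illustrative, not minikube's exact values):
	    # write the candidate unit, then install and restart only if it differs from the live copy
	    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	      > /tmp/docker.override.new
	    sudo diff -u /etc/systemd/system/docker.service.d/override.conf /tmp/docker.override.new \
	      || { sudo mkdir -p /etc/systemd/system/docker.service.d \
	           && sudo mv /tmp/docker.override.new /etc/systemd/system/docker.service.d/override.conf \
	           && sudo systemctl daemon-reload && sudo systemctl restart docker; }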
	
	I0912 21:46:40.733173   69940 machine.go:96] duration metric: took 2.632387975s to provisionDockerMachine
	I0912 21:46:40.733193   69940 client.go:171] duration metric: took 17.069311471s to LocalClient.Create
	I0912 21:46:40.733219   69940 start.go:167] duration metric: took 17.069398477s to libmachine.API.Create "addons-331995"
	I0912 21:46:40.733235   69940 start.go:293] postStartSetup for "addons-331995" (driver="docker")
	I0912 21:46:40.733255   69940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:46:40.733385   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:46:40.733478   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:40.765655   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:40.869352   69940 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:46:40.875276   69940 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:46:40.875344   69940 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:46:40.875362   69940 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:46:40.875391   69940 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 21:46:40.875417   69940 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/addons for local assets ...
	I0912 21:46:40.875520   69940 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/files for local assets ...
	I0912 21:46:40.875569   69940 start.go:296] duration metric: took 142.323964ms for postStartSetup
	I0912 21:46:40.876187   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:40.904482   69940 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/config.json ...
	I0912 21:46:40.904998   69940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:46:40.905154   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:40.942457   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.037221   69940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:46:41.045006   69940 start.go:128] duration metric: took 17.386113745s to createHost
	I0912 21:46:41.045089   69940 start.go:83] releasing machines lock for "addons-331995", held for 17.386422281s
	I0912 21:46:41.045318   69940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-331995
	I0912 21:46:41.073837   69940 ssh_runner.go:195] Run: cat /version.json
	I0912 21:46:41.073856   69940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:46:41.073940   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:41.073971   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:46:41.119499   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.120558   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:46:41.327179   69940 ssh_runner.go:195] Run: systemctl --version
	I0912 21:46:41.335113   69940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:46:41.342795   69940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 21:46:41.386000   69940 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 21:46:41.386341   69940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:46:41.435621   69940 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
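	The two `find` runs above prepare /etc/cni/net.d for minikube's own bridge config: the first patches any loopback config in place (adding the "name" field and pinning cniVersion to 1.0.0, both required by CNI 1.0 plugins), the second renames competing bridge/podman configs to *.mk_disabled so they stop matching. A loopback config consistent with those edits looks like this (sketch; the file name is illustrative):
	    printf '{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }\n' \
	      | sudo tee /etc/cni/net.d/200-loopback.conf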
	I0912 21:46:41.435690   69940 start.go:495] detecting cgroup driver to use...
	I0912 21:46:41.435737   69940 detect.go:190] detected "systemd" cgroup driver on host os
	I0912 21:46:41.436121   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:46:41.465963   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:46:41.483761   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:46:41.500944   69940 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0912 21:46:41.501163   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0912 21:46:41.519913   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:46:41.536657   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:46:41.554994   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:46:41.572285   69940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:46:41.588529   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:46:41.605783   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:46:41.622457   69940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 21:46:41.639495   69940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:46:41.654641   69940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:46:41.669714   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:41.810644   69940 ssh_runner.go:195] Run: sudo systemctl restart containerd
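	The sed pipeline above edits /etc/containerd/config.toml in place rather than templating a new file: it pins the sandbox (pause) image, forces the runc v2 shim, points conf_dir back at /etc/cni/net.d, re-enables unprivileged ports, and, since the host was detected as using the "systemd" cgroup driver, flips SystemdCgroup to true. The CRI section it is steering toward looks roughly like this (sketch of the relevant TOML keys, not the full file):
	    # [plugins."io.containerd.grpc.v1.cri"]
	    #   sandbox_image = "registry.k8s.io/pause:3.10"
	    #   enable_unprivileged_ports = true
	    #   [plugins."io.containerd.grpc.v1.cri".cni]
	    #     conf_dir = "/etc/cni/net.d"
	    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    #     runtime_type = "io.containerd.runc.v2"
	    #     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    #       SystemdCgroup = true
	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # verify the edit landed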
	I0912 21:46:42.035548   69940 start.go:495] detecting cgroup driver to use...
	I0912 21:46:42.035610   69940 detect.go:190] detected "systemd" cgroup driver on host os
	I0912 21:46:42.035699   69940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 21:46:42.109199   69940 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0912 21:46:42.109308   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 21:46:42.151003   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:46:42.199177   69940 ssh_runner.go:195] Run: which cri-dockerd
	I0912 21:46:42.207435   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 21:46:42.229970   69940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 21:46:42.279564   69940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 21:46:42.512322   69940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 21:46:42.750642   69940 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0912 21:46:42.750833   69940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0912 21:46:42.788701   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:42.925937   69940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 21:46:43.405664   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 21:46:43.426929   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:46:43.446539   69940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 21:46:43.588320   69940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 21:46:43.729539   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:43.868746   69940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 21:46:43.898554   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:46:43.918693   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:44.059394   69940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 21:46:44.177994   69940 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 21:46:44.178155   69940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 21:46:44.188096   69940 start.go:563] Will wait 60s for crictl version
	I0912 21:46:44.188210   69940 ssh_runner.go:195] Run: which crictl
	I0912 21:46:44.196142   69940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:46:44.254781   69940 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 21:46:44.254907   69940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:46:44.299238   69940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:46:44.352803   69940 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 21:46:44.352984   69940 cli_runner.go:164] Run: docker network inspect addons-331995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:46:44.380002   69940 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:46:44.386035   69940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
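	The /etc/hosts update above uses the usual replace-or-append idiom: filter out any existing line for the name, append the fresh mapping to a temp file, then copy it back in a single step so the live file is never left half-written. The same idiom for an arbitrary entry (name and IP are placeholders):
	    NAME=host.example.internal; IP=192.0.2.1
	    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts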
	I0912 21:46:44.408949   69940 out.go:177]   - kubelet.cgroups-per-qos=false
	I0912 21:46:44.414572   69940 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0912 21:46:44.422631   69940 kubeadm.go:883] updating cluster {Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:46:44.422838   69940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:46:44.422979   69940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:46:44.456297   69940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:46:44.456410   69940 docker.go:615] Images already preloaded, skipping extraction
	I0912 21:46:44.456605   69940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:46:44.490743   69940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:46:44.490798   69940 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:46:44.490814   69940 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0912 21:46:44.490961   69940 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-331995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:46:44.491072   69940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 21:46:44.573046   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:46:44.573135   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:46:44.573189   69940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:46:44.573261   69940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-331995 NodeName:addons-331995 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:46:44.573501   69940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-331995"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
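	The config above stitches four documents into one file: InitConfiguration (node-local settings and the cri-dockerd socket), ClusterConfiguration (API server SANs and admission plugins, the minikube cert dir, the control-plane endpoint), KubeletConfiguration (systemd cgroup driver, disk eviction disabled so the node survives a full disk), and KubeProxyConfiguration (conntrack tuning left to the host). A file like this can be exercised without changing node state first (sketch; the path matches the one scp'd below, and kubeadm's later warning suggests `kubeadm config migrate` to move off the deprecated v1beta3 spec):
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run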
	
	I0912 21:46:44.573658   69940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:46:44.589730   69940 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:46:44.589956   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:46:44.605714   69940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0912 21:46:44.638162   69940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:46:44.669472   69940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0912 21:46:44.699721   69940 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:46:44.705480   69940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:46:44.724840   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:46:44.863901   69940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:46:44.902594   69940 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995 for IP: 192.168.49.2
	I0912 21:46:44.902622   69940 certs.go:194] generating shared ca certs ...
	I0912 21:46:44.902649   69940 certs.go:226] acquiring lock for ca certs: {Name:mk07132fcad645396ad0113bfe1144f20ebd53cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:44.903019   69940 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key
	I0912 21:46:45.136562   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt ...
	I0912 21:46:45.136605   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt: {Name:mkd77991a3935507c9e39e1e8c7352eb64a051a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.137041   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key ...
	I0912 21:46:45.137107   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key: {Name:mkaded902d60e06576b03be4b279e2d32b5cf911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.137441   69940 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key
	I0912 21:46:45.393857   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt ...
	I0912 21:46:45.393900   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt: {Name:mk1c59db2db7bf45fc1d1c32c142700c002c11a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.394371   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key ...
	I0912 21:46:45.394399   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key: {Name:mk282dce664465d1df24006e91d6f07a7df93911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.394718   69940 certs.go:256] generating profile certs ...
	I0912 21:46:45.394844   69940 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key
	I0912 21:46:45.394887   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt with IP's: []
	I0912 21:46:45.636662   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt ...
	I0912 21:46:45.636707   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: {Name:mke31e589f83a1c38a024aea52726e376c982342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.637192   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key ...
	I0912 21:46:45.637225   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.key: {Name:mk39b4fda5dce98acf00f584ee063c859bc97327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.637568   69940 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548
	I0912 21:46:45.637628   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0912 21:46:45.994706   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 ...
	I0912 21:46:45.994751   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548: {Name:mk1efb70c236fa62bb69f0dd1d330505bbd1c6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.995215   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548 ...
	I0912 21:46:45.995247   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548: {Name:mk609e719ed674e4e5f39cc7500611ef674a975b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:45.995565   69940 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt.9ffb3548 -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt
	I0912 21:46:45.995783   69940 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key.9ffb3548 -> /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key
	I0912 21:46:45.995900   69940 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key
	I0912 21:46:45.995957   69940 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt with IP's: []
	I0912 21:46:46.176876   69940 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt ...
	I0912 21:46:46.176919   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt: {Name:mked3e93f8e89d479655f0a83f0cf91acc0dff4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:46.177391   69940 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key ...
	I0912 21:46:46.177422   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key: {Name:mkf183ad3f3c9b739996fca014f9b7a3ab18fed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:46:46.177991   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 21:46:46.178091   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/ca.pem (1119 bytes)
	I0912 21:46:46.178157   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/cert.pem (1164 bytes)
	I0912 21:46:46.178236   69940 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/certs/key.pem (1679 bytes)
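	Beyond the shared CAs found above, the profile certs generated earlier give kubectl a client identity, give the aggregator a front-proxy client cert, and give the API server a serving cert whose SANs span the service VIPs (10.96.0.1, 10.0.0.1), loopback, and the node IP (192.168.49.2). The SANs can be confirmed on the generated cert (sketch; home-directory prefix elided from the path in the log):
	    openssl x509 -noout -text -in .minikube/profiles/addons-331995/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'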
	I0912 21:46:46.179148   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:46:46.220800   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:46:46.261620   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:46:46.302857   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:46:46.343368   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:46:46.384900   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:46:46.426413   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:46:46.468244   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 21:46:46.515336   69940 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:46:46.575446   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:46:46.614664   69940 ssh_runner.go:195] Run: openssl version
	I0912 21:46:46.623677   69940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:46:46.640711   69940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.646985   69940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:46 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.647198   69940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:46:46.658086   69940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
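	The b5213941.0 link name above is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, so the symlink must be named <hash>.0, and the hash is exactly what the preceding `openssl x509 -hash` call printed. Reproducing it by hand (sketch):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0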
	I0912 21:46:46.674601   69940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:46:46.680510   69940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:46:46.680675   69940 kubeadm.go:392] StartCluster: {Name:addons-331995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-331995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:46:46.680914   69940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 21:46:46.710471   69940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:46:46.726383   69940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:46:46.742178   69940 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:46:46.742353   69940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:46:46.757766   69940 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:46:46.757791   69940 kubeadm.go:157] found existing configuration files:
	
	I0912 21:46:46.757889   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:46:46.773816   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:46:46.773956   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:46:46.788964   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:46:46.804401   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:46:46.804535   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:46:46.819081   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:46:46.834253   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:46:46.834468   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:46:46.849366   69940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:46:46.865086   69940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:46:46.865202   69940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:46:46.880459   69940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:46:46.940128   69940 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:46:46.940282   69940 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:46:47.065304   69940 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:46:47.065529   69940 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:46:47.065805   69940 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:46:47.084539   69940 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:46:47.089717   69940 out.go:235]   - Generating certificates and keys ...
	I0912 21:46:47.089881   69940 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:46:47.090001   69940 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:46:47.283203   69940 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:46:47.425832   69940 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:46:47.774554   69940 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:46:47.971522   69940 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:46:48.125270   69940 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:46:48.125919   69940 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-331995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:46:48.255292   69940 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:46:48.255825   69940 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-331995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:46:48.576713   69940 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:46:48.665241   69940 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:46:48.912557   69940 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:46:48.912922   69940 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:46:49.070523   69940 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:46:49.161285   69940 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:46:49.288107   69940 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:46:49.390656   69940 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:46:49.527646   69940 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:46:49.535586   69940 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:46:49.535719   69940 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:46:49.539642   69940 out.go:235]   - Booting up control plane ...
	I0912 21:46:49.539823   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:46:49.539961   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:46:49.540105   69940 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:46:49.567872   69940 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:46:49.577736   69940 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:46:49.577843   69940 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:46:49.741755   69940 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:46:49.742025   69940 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:46:50.241387   69940 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.749206ms
	I0912 21:46:50.241586   69940 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:46:57.245270   69940 kubeadm.go:310] [api-check] The API server is healthy after 7.003793097s
	I0912 21:46:57.268751   69940 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:46:57.290537   69940 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:46:57.359707   69940 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:46:57.360167   69940 kubeadm.go:310] [mark-control-plane] Marking the node addons-331995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:46:57.420897   69940 kubeadm.go:310] [bootstrap-token] Using token: eviub1.04vr1snjz6iiyfq2
	I0912 21:46:57.425241   69940 out.go:235]   - Configuring RBAC rules ...
	I0912 21:46:57.425709   69940 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:46:57.453711   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:46:57.487364   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:46:57.494407   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:46:57.504521   69940 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:46:57.512811   69940 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:46:57.655762   69940 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:46:58.275732   69940 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:46:58.659405   69940 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:46:58.661475   69940 kubeadm.go:310] 
	I0912 21:46:58.661624   69940 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:46:58.661636   69940 kubeadm.go:310] 
	I0912 21:46:58.661900   69940 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:46:58.661918   69940 kubeadm.go:310] 
	I0912 21:46:58.661994   69940 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:46:58.662152   69940 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:46:58.662263   69940 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:46:58.662274   69940 kubeadm.go:310] 
	I0912 21:46:58.662384   69940 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:46:58.662394   69940 kubeadm.go:310] 
	I0912 21:46:58.662495   69940 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:46:58.662505   69940 kubeadm.go:310] 
	I0912 21:46:58.662613   69940 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:46:58.662772   69940 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:46:58.662929   69940 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:46:58.662939   69940 kubeadm.go:310] 
	I0912 21:46:58.663258   69940 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:46:58.663432   69940 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:46:58.663444   69940 kubeadm.go:310] 
	I0912 21:46:58.663620   69940 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eviub1.04vr1snjz6iiyfq2 \
	I0912 21:46:58.663851   69940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c3b7faca2e9b71afa48ddc2040f3d4018e7f54d7a41f332b5ec5aea93a2e14 \
	I0912 21:46:58.664317   69940 kubeadm.go:310] 	--control-plane 
	I0912 21:46:58.664338   69940 kubeadm.go:310] 
	I0912 21:46:58.664503   69940 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:46:58.664514   69940 kubeadm.go:310] 
	I0912 21:46:58.664680   69940 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eviub1.04vr1snjz6iiyfq2 \
	I0912 21:46:58.664893   69940 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c3b7faca2e9b71afa48ddc2040f3d4018e7f54d7a41f332b5ec5aea93a2e14 
	I0912 21:46:58.670534   69940 kubeadm.go:310] W0912 21:46:46.935997    1691 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:46:58.671125   69940 kubeadm.go:310] W0912 21:46:46.937105    1691 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:46:58.671368   69940 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
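	The join commands above pin the cluster CA: --discovery-token-ca-cert-hash is the SHA-256 of the CA's public key, letting a joining node authenticate the control plane before trusting anything it serves. For an RSA CA it can be recomputed from the cert (sketch; minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than the kubeadm default /etc/kubernetes/pki/ca.crt):
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'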
	I0912 21:46:58.671446   69940 cni.go:84] Creating CNI manager for ""
	I0912 21:46:58.671476   69940 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:46:58.675563   69940 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:46:58.679321   69940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:46:58.696746   69940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:46:58.733918   69940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:46:58.734041   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-331995 minikube.k8s.io/updated_at=2024_09_12T21_46_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-331995 minikube.k8s.io/primary=true
	I0912 21:46:58.733936   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:58.962174   69940 ops.go:34] apiserver oom_adj: -16
	I0912 21:46:58.962316   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:59.463281   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:46:59.963412   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:00.462514   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:00.962887   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:01.462919   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:01.963111   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:02.463357   69940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:47:02.600871   69940 kubeadm.go:1113] duration metric: took 3.867041357s to wait for elevateKubeSystemPrivileges
	I0912 21:47:02.600910   69940 kubeadm.go:394] duration metric: took 15.920243177s to StartCluster
	I0912 21:47:02.600941   69940 settings.go:142] acquiring lock: {Name:mk841109e15a3b6330c92e1dba5779a890c1b040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:47:02.601346   69940 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 21:47:02.602154   69940 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig: {Name:mk2e26d24f77797e24558e31cf6990f1997e9f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:47:02.602665   69940 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:47:02.602904   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:47:02.603371   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:47:02.603431   69940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:47:02.603550   69940 addons.go:69] Setting yakd=true in profile "addons-331995"
	I0912 21:47:02.603599   69940 addons.go:234] Setting addon yakd=true in "addons-331995"
	I0912 21:47:02.603648   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.604861   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.605169   69940 addons.go:69] Setting inspektor-gadget=true in profile "addons-331995"
	I0912 21:47:02.605210   69940 addons.go:234] Setting addon inspektor-gadget=true in "addons-331995"
	I0912 21:47:02.605248   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.605971   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.606728   69940 addons.go:69] Setting metrics-server=true in profile "addons-331995"
	I0912 21:47:02.606798   69940 addons.go:234] Setting addon metrics-server=true in "addons-331995"
	I0912 21:47:02.606823   69940 addons.go:69] Setting cloud-spanner=true in profile "addons-331995"
	I0912 21:47:02.606843   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.606860   69940 addons.go:234] Setting addon cloud-spanner=true in "addons-331995"
	I0912 21:47:02.606915   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.607562   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.607566   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.610945   69940 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-331995"
	I0912 21:47:02.611091   69940 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-331995"
	I0912 21:47:02.611151   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.611459   69940 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-331995"
	I0912 21:47:02.611634   69940 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-331995"
	I0912 21:47:02.611845   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.612098   69940 addons.go:69] Setting default-storageclass=true in profile "addons-331995"
	I0912 21:47:02.612185   69940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-331995"
	I0912 21:47:02.612991   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.613946   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.622156   69940 addons.go:69] Setting gcp-auth=true in profile "addons-331995"
	I0912 21:47:02.622217   69940 mustload.go:65] Loading cluster: addons-331995
	I0912 21:47:02.622736   69940 config.go:182] Loaded profile config "addons-331995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:47:02.623313   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.623828   69940 addons.go:69] Setting registry=true in profile "addons-331995"
	I0912 21:47:02.623905   69940 addons.go:234] Setting addon registry=true in "addons-331995"
	I0912 21:47:02.623969   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.624827   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.634347   69940 addons.go:69] Setting helm-tiller=true in profile "addons-331995"
	I0912 21:47:02.634449   69940 addons.go:234] Setting addon helm-tiller=true in "addons-331995"
	I0912 21:47:02.634516   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.635355   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.638783   69940 addons.go:69] Setting storage-provisioner=true in profile "addons-331995"
	I0912 21:47:02.638869   69940 addons.go:234] Setting addon storage-provisioner=true in "addons-331995"
	I0912 21:47:02.638926   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.639961   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.661949   69940 addons.go:69] Setting ingress=true in profile "addons-331995"
	I0912 21:47:02.662081   69940 addons.go:234] Setting addon ingress=true in "addons-331995"
	I0912 21:47:02.662165   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.663043   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.666400   69940 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-331995"
	I0912 21:47:02.666498   69940 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-331995"
	I0912 21:47:02.667041   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.680380   69940 addons.go:69] Setting ingress-dns=true in profile "addons-331995"
	I0912 21:47:02.680461   69940 addons.go:234] Setting addon ingress-dns=true in "addons-331995"
	I0912 21:47:02.680533   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.681382   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.688341   69940 addons.go:69] Setting volcano=true in profile "addons-331995"
	I0912 21:47:02.688557   69940 addons.go:234] Setting addon volcano=true in "addons-331995"
	I0912 21:47:02.688676   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.689759   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.721377   69940 addons.go:69] Setting volumesnapshots=true in profile "addons-331995"
	I0912 21:47:02.721484   69940 addons.go:234] Setting addon volumesnapshots=true in "addons-331995"
	I0912 21:47:02.721559   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:02.722633   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
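Each addon toggle above is bracketed by a docker inspect of the node container: that is how minikube confirms the "machine" is still running before wiring an addon into it. The same probe works standalone (profile name taken from this run):

    # Poll the node container's lifecycle state; expect "running".
    docker container inspect addons-331995 --format={{.State.Status}}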
	I0912 21:47:02.725963   69940 out.go:177] * Verifying Kubernetes components...
	I0912 21:47:02.732168   69940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:47:02.861576   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:02.895815   69940 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:47:02.903665   69940 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:47:02.903790   69940 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:47:02.903966   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:02.907465   69940 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:47:02.914328   69940 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:47:02.914445   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:47:02.914625   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.142536   69940 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:47:03.149399   69940 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:47:03.149894   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:47:03.153117   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.238795   69940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
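The sed pipeline above edits CoreDNS's Corefile in flight: one expression inserts a hosts block mapping host.minikube.internal to the gateway address before the forward directive, the other adds a log directive before errors. Reconstructed from those two expressions, the patched Corefile gains roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }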
	I0912 21:47:03.252154   69940 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:47:03.257230   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:47:03.257275   69940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:47:03.257439   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.297455   69940 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:47:03.306985   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:47:03.307107   69940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:47:03.307254   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.357918   69940 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:47:03.382397   69940 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:47:03.390349   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.395776   69940 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:47:03.395944   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:47:03.396340   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.424211   69940 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:47:03.429257   69940 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:47:03.429291   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:47:03.429398   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.466592   69940 addons.go:234] Setting addon default-storageclass=true in "addons-331995"
	I0912 21:47:03.466783   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.467802   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:03.475103   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.479671   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:47:03.483156   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:47:03.483288   69940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:47:03.483476   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.514733   69940 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-331995"
	I0912 21:47:03.514802   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:03.515898   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:03.528988   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:47:03.537921   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:47:03.544167   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:03.549107   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:47:03.552093   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:03.554801   69940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:47:03.555853   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:47:03.561225   69940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:47:03.561283   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:47:03.561412   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.570113   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:47:03.575932   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:47:03.581860   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:47:03.556334   69940 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:47:03.582123   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:47:03.582084   69940 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:47:03.582318   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.586243   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:47:03.586272   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:47:03.586375   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.627136   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:47:03.635130   69940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:47:03.636530   69940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:47:03.638576   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:47:03.638612   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:47:03.638744   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.735887   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.812641   69940 cli_runner.go:217] Completed: docker container inspect addons-331995 --format={{.State.Status}}: (1.122817175s)
	I0912 21:47:03.818157   69940 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:47:03.822292   69940 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:47:03.831657   69940 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:47:03.842804   69940 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:47:03.842919   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:47:03.843207   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:03.861319   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.886757   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:03.907313   69940 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:47:03.910965   69940 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:47:03.916784   69940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:47:03.916820   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:47:03.916950   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:04.041952   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.105330   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.108358   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.225280   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.245470   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.261341   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.263915   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.303219   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.308767   69940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:47:04.308813   69940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:47:04.308917   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:04.332244   69940 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:47:04.332277   69940 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:47:04.334487   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.337683   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.393592   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:04.622506   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:47:04.638973   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:47:04.639109   69940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:47:04.744426   69940 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:47:04.744542   69940 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:47:04.783484   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:47:04.783633   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:47:04.916138   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:47:04.916175   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:47:05.045377   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:47:05.096299   69940 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:47:05.096334   69940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:47:05.121206   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:47:05.124081   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:47:05.154526   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:47:05.154560   69940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:47:05.158505   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:47:05.158555   69940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:47:05.264483   69940 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:47:05.264518   69940 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:47:05.264803   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:47:05.377281   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:47:05.377313   69940 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:47:05.409810   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:47:05.409844   69940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:47:05.430531   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:47:05.430567   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:47:05.444452   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:47:05.459012   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:47:05.459074   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:47:05.470543   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:47:05.511380   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:47:05.538545   69940 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:47:05.538586   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:47:05.738940   69940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:47:05.738976   69940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:47:05.796139   69940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:47:05.796176   69940 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:47:05.918486   69940 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:47:05.918517   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:47:05.924031   69940 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:47:05.924077   69940 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:47:05.949069   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:47:05.949103   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:47:05.969506   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:47:05.969537   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:47:06.015842   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:47:06.197510   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:47:06.281535   69940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:47:06.281571   69940 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:47:06.364447   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:47:06.439229   69940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:47:06.439269   69940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:47:06.469286   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:47:06.483718   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:47:06.483758   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:47:06.831412   69940 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:47:06.831452   69940 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:47:06.954365   69940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.715445561s)
	I0912 21:47:06.954419   69940 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0912 21:47:06.956396   69940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.319778628s)
	I0912 21:47:06.958067   69940 node_ready.go:35] waiting up to 6m0s for node "addons-331995" to be "Ready" ...
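node_ready.go's wait here is equivalent to a kubectl-level readiness gate; by hand, with the context and node name from this run:

    # Block up to the same 6m budget until the node reports Ready.
    kubectl --context addons-331995 wait --for=condition=Ready \
      node/addons-331995 --timeout=6m0s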
	I0912 21:47:07.016366   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:47:07.016408   69940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:47:07.312358   69940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:47:07.312393   69940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:47:07.665908   69940 node_ready.go:49] node "addons-331995" has status "Ready":"True"
	I0912 21:47:07.665943   69940 node_ready.go:38] duration metric: took 707.835474ms for node "addons-331995" to be "Ready" ...
	I0912 21:47:07.665960   69940 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:47:07.726775   69940 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:47:07.726813   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:47:07.758287   69940 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:07.758315   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:47:08.022483   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:47:08.022516   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:47:08.431035   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:47:08.515447   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:47:08.515484   69940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:47:08.535153   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:08.737345   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:47:08.737387   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:47:09.208337   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:47:09.208371   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0912 21:47:09.520267   69940 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-331995" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0912 21:47:09.520313   69940 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
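The failed rescale is an optimistic-concurrency conflict: the update carried a resourceVersion that went stale between read and write ("the object has been modified"). minikube treats it as non-retryable here, but a fresh read-modify-write succeeds; kubectl scale performs that re-read implicitly:

    # Re-issue the replica change against the live object.
    kubectl --context addons-331995 -n kube-system scale deployment coredns --replicas=1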
	I0912 21:47:09.557224   69940 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:09.882093   69940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:47:09.882151   69940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:47:10.346376   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:47:12.850190   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-12 21:47:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:47:13.749314   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.126647445s)
	I0912 21:47:15.219653   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:17.632114   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:19.646249   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (14.524985272s)
	I0912 21:47:19.646390   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.52227182s)
	I0912 21:47:19.646545   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.601133488s)
	I0912 21:47:20.416242   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:22.604351   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:22.990670   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:47:22.990997   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:23.090291   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:23.460850   69940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:47:23.584505   69940 addons.go:234] Setting addon gcp-auth=true in "addons-331995"
	I0912 21:47:23.584578   69940 host.go:66] Checking if "addons-331995" exists ...
	I0912 21:47:23.585417   69940 cli_runner.go:164] Run: docker container inspect addons-331995 --format={{.State.Status}}
	I0912 21:47:23.656267   69940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:47:23.656420   69940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-331995
	I0912 21:47:23.728843   69940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/addons-331995/id_rsa Username:docker}
	I0912 21:47:24.675477   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:27.096207   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:29.217989   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:29.317812   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (24.052968507s)
	I0912 21:47:29.317860   69940 addons.go:475] Verifying addon ingress=true in "addons-331995"
	I0912 21:47:29.318321   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (23.873826599s)
	I0912 21:47:29.318432   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (23.84785425s)
	I0912 21:47:29.321120   69940 out.go:177] * Verifying ingress addon...
	I0912 21:47:29.326081   69940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:47:29.732256   69940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:47:29.732377   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:29.928765   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:31.429067   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:31.507389   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:31.632544   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:32.205252   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:32.617438   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.020098   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.503156   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:33.702318   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:33.930399   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:34.686971   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:34.762704   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (29.251269421s)
	I0912 21:47:34.763091   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (28.747096607s)
	I0912 21:47:34.763174   69940 addons.go:475] Verifying addon registry=true in "addons-331995"
	I0912 21:47:34.763298   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (28.56573774s)
	I0912 21:47:34.763514   69940 addons.go:475] Verifying addon metrics-server=true in "addons-331995"
	I0912 21:47:34.763958   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (28.399451614s)
	I0912 21:47:34.764150   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (28.294818834s)
	I0912 21:47:34.764371   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (26.333282697s)
	I0912 21:47:34.764614   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (26.229423025s)
	W0912 21:47:34.764683   69940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:47:34.764762   69940 retry.go:31] will retry after 220.938937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
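This is the classic CRD-versus-CR race: the VolumeSnapshotClass object reaches the API server in the same apply batch as the CRD that defines it, before that CRD is established, so the REST mapping lookup fails. The 220ms retry (and the apply --force at 21:47:34 below) eventually lands; ordering the apply avoids the race entirely. A sketch, reusing the manifest paths from this run:

    # Install the CRD, wait until the API server marks it Established,
    # then apply the custom resource that depends on it.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml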
	I0912 21:47:34.767449   69940 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-331995 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:47:34.767785   69940 out.go:177] * Verifying registry addon...
	I0912 21:47:34.773331   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:47:34.986820   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:47:35.378968   69940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:47:35.379129   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:35.381623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:35.887762   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:35.891746   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.069203   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:36.076593   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.081297   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:36.709261   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:36.712091   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:36.819876   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (26.473344456s)
	I0912 21:47:36.820037   69940 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-331995"
	I0912 21:47:36.820827   69940 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (13.164524688s)
	I0912 21:47:36.823949   69940 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:47:36.824346   69940 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:47:36.829299   69940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:47:36.830212   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:47:36.832986   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:47:36.833114   69940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:47:36.976762   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:47:36.976895   69940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:47:37.044983   69940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:47:37.045013   69940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:47:37.136168   69940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:47:37.239527   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:37.240456   69940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:47:37.240548   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:37.242189   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:37.431859   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:37.590777   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:37.625272   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.181037   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:38.185271   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:38.190263   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:38.261548   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.620592   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:38.630363   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:38.635885   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.105382   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.110725   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.112399   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.323447   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.530324   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.530623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.859110   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:39.860232   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:39.861290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:39.905482   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.918592786s)
	I0912 21:47:40.154576   69940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.018266504s)
	I0912 21:47:40.162944   69940 addons.go:475] Verifying addon gcp-auth=true in "addons-331995"
	I0912 21:47:40.167283   69940 out.go:177] * Verifying gcp-auth addon...
	I0912 21:47:40.174036   69940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:47:40.198322   69940 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:47:40.299719   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:40.342674   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:40.355912   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:40.566976   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:40.778846   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:40.833578   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:40.838380   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:41.296727   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:41.344005   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:41.357447   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:41.787497   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:41.837073   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:41.865267   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:42.297170   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:42.351194   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:42.366806   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:42.586658   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:42.787775   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:42.850126   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:42.852545   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:43.285358   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:43.337027   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:43.349090   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:43.780318   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:43.833935   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:43.844103   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:44.280231   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:44.333967   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:44.343825   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:44.780782   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:44.836077   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:44.848509   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:45.077031   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:45.283193   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:45.336801   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:45.346384   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:45.786878   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:45.854758   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:45.860692   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:46.293332   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:46.346544   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:46.359759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:46.787654   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:46.834326   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:46.839741   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:47.293360   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:47.335745   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:47.342239   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:47.570984   69940 pod_ready.go:103] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"False"
	I0912 21:47:47.780263   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:47.836722   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:47.846490   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.111990   69940 pod_ready.go:93] pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.112027   69940 pod_ready.go:82] duration metric: took 38.554754932s for pod "coredns-7c65d6cfc9-6p998" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.112045   69940 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.133582   69940 pod_ready.go:93] pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.133620   69940 pod_ready.go:82] duration metric: took 21.507818ms for pod "coredns-7c65d6cfc9-vhwzq" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.133639   69940 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.146783   69940 pod_ready.go:93] pod "etcd-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.146821   69940 pod_ready.go:82] duration metric: took 13.17035ms for pod "etcd-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.146865   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.161721   69940 pod_ready.go:93] pod "kube-apiserver-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.161794   69940 pod_ready.go:82] duration metric: took 14.911609ms for pod "kube-apiserver-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.161817   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.178200   69940 pod_ready.go:93] pod "kube-controller-manager-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.178257   69940 pod_ready.go:82] duration metric: took 16.427127ms for pod "kube-controller-manager-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.178277   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9slnj" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.339924   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:48.345955   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:48.355610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.467892   69940 pod_ready.go:93] pod "kube-proxy-9slnj" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.467926   69940 pod_ready.go:82] duration metric: took 289.63631ms for pod "kube-proxy-9slnj" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.467943   69940 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.783963   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:48.835169   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:48.844946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:48.866724   69940 pod_ready.go:93] pod "kube-scheduler-addons-331995" in "kube-system" namespace has status "Ready":"True"
	I0912 21:47:48.866763   69940 pod_ready.go:82] duration metric: took 398.808683ms for pod "kube-scheduler-addons-331995" in "kube-system" namespace to be "Ready" ...
	I0912 21:47:48.866777   69940 pod_ready.go:39] duration metric: took 41.200798834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
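
The pod_ready checks just completed differ from the kapi waits: they look at the pod's Ready condition rather than only its phase, which is why the log prints has status "Ready":"True". A condensed client-go sketch of that check, reusing the log's on-node kubeconfig path; isPodReady is an illustrative helper, not minikube's exact code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, the check
// behind the pod_ready.go:93 lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-addons-331995", metav1.GetOptions{})
	if err == nil {
		fmt.Println("ready:", isPodReady(p))
	}
}
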
	I0912 21:47:48.866812   69940 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:47:48.866916   69940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:47:48.906923   69940 api_server.go:72] duration metric: took 46.304203247s to wait for apiserver process to appear ...
	I0912 21:47:48.907127   69940 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:47:48.907205   69940 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:47:48.917509   69940 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:47:48.919581   69940 api_server.go:141] control plane version: v1.31.1
	I0912 21:47:48.919688   69940 api_server.go:131] duration metric: took 12.503799ms to wait for apiserver health ...
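
The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A stdlib sketch of the same poll; InsecureSkipVerify keeps it short, whereas the real check would trust the cluster CA rather than disable TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip cert verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok", as logged
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
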
	I0912 21:47:48.919727   69940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:47:49.084148   69940 system_pods.go:59] 19 kube-system pods found
	I0912 21:47:49.084219   69940 system_pods.go:61] "coredns-7c65d6cfc9-6p998" [18897be7-b902-4875-b941-ae33609d6ad3] Running
	I0912 21:47:49.084312   69940 system_pods.go:61] "coredns-7c65d6cfc9-vhwzq" [15cf2078-0cd4-4aee-af43-8e6982db1d9f] Running
	I0912 21:47:49.084409   69940 system_pods.go:61] "csi-hostpath-attacher-0" [876fb446-0031-409c-9b88-6ee9dbab79e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:47:49.084439   69940 system_pods.go:61] "csi-hostpath-resizer-0" [ca55e341-cd77-4896-92d1-1e316d2d7b95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:47:49.084502   69940 system_pods.go:61] "csi-hostpathplugin-ssw8n" [0fae5f54-937e-4802-b06f-184fcefc7ded] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:47:49.084535   69940 system_pods.go:61] "etcd-addons-331995" [ff9bbf64-85f3-4174-98da-3f3fff1de6e6] Running
	I0912 21:47:49.084549   69940 system_pods.go:61] "kube-apiserver-addons-331995" [441badbc-59a0-417e-8e96-21fce09febc8] Running
	I0912 21:47:49.084582   69940 system_pods.go:61] "kube-controller-manager-addons-331995" [32b0fae0-1656-4310-a0db-8ac8ebd06b24] Running
	I0912 21:47:49.084603   69940 system_pods.go:61] "kube-ingress-dns-minikube" [ff5b18d1-e63d-4b20-9301-4fe6a2d2b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:47:49.084628   69940 system_pods.go:61] "kube-proxy-9slnj" [fa2c6af2-4383-4d79-a6b6-8eee8fa882ef] Running
	I0912 21:47:49.084642   69940 system_pods.go:61] "kube-scheduler-addons-331995" [757291c4-4137-4c38-b3bf-c797be72627f] Running
	I0912 21:47:49.084656   69940 system_pods.go:61] "metrics-server-84c5f94fbc-qj8c7" [10575e3b-51e3-4a17-9911-8ed2245ed9c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:47:49.084667   69940 system_pods.go:61] "nvidia-device-plugin-daemonset-4sqcf" [2bb7e4c9-91fb-4914-ab76-7ffc5517e40d] Running
	I0912 21:47:49.084675   69940 system_pods.go:61] "registry-66c9cd494c-6jhvv" [07e442c7-a078-4cde-aa3c-fad57aac4c18] Running
	I0912 21:47:49.084710   69940 system_pods.go:61] "registry-proxy-rr7bm" [c0051f59-18dc-4684-b682-f4a992ea12a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:47:49.084729   69940 system_pods.go:61] "snapshot-controller-56fcc65765-mjfwh" [9163427d-b44f-4930-96ac-3890d3cf0f2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.084767   69940 system_pods.go:61] "snapshot-controller-56fcc65765-zbzzp" [b78b8a57-7f74-4cfa-a4e6-cd9904249298] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.084780   69940 system_pods.go:61] "storage-provisioner" [e82e018d-5b98-4b82-a817-a66b52cebf28] Running
	I0912 21:47:49.084792   69940 system_pods.go:61] "tiller-deploy-b48cc5f79-pxwc5" [c1f59ae3-6c11-42a9-b480-6fb541264acf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:47:49.084807   69940 system_pods.go:74] duration metric: took 165.048833ms to wait for pod list to return data ...
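
The system_pods dump above is produced by listing everything in kube-system and, for pods that are not yet Running, naming the containers with unready status. A compact client-go sketch of that summary; the output formatting is approximated from the log, not taken from minikube's code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		line := fmt.Sprintf("%q [%s] %s", p.Name, p.UID, p.Status.Phase)
		var unready []string
		for _, c := range p.Status.ContainerStatuses {
			if !c.Ready {
				unready = append(unready, c.Name)
			}
		}
		if len(unready) > 0 {
			line += fmt.Sprintf(" (containers with unready status: %v)", unready)
		}
		fmt.Println(line)
	}
}
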
	I0912 21:47:49.084847   69940 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:47:49.265495   69940 default_sa.go:45] found service account: "default"
	I0912 21:47:49.265616   69940 default_sa.go:55] duration metric: took 180.740943ms for default service account to be created ...
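
The default_sa wait amounts to retrying a Get on the "default" ServiceAccount until the controller manager has created it. A minimal client-go sketch under the same on-node kubeconfig assumption as above:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 30; i++ {
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found service account: %q\n", sa.Name)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("default service account never appeared")
}
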
	I0912 21:47:49.265697   69940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:47:49.281459   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:49.347172   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:49.347833   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:49.497282   69940 system_pods.go:86] 19 kube-system pods found
	I0912 21:47:49.497412   69940 system_pods.go:89] "coredns-7c65d6cfc9-6p998" [18897be7-b902-4875-b941-ae33609d6ad3] Running
	I0912 21:47:49.497450   69940 system_pods.go:89] "coredns-7c65d6cfc9-vhwzq" [15cf2078-0cd4-4aee-af43-8e6982db1d9f] Running
	I0912 21:47:49.497485   69940 system_pods.go:89] "csi-hostpath-attacher-0" [876fb446-0031-409c-9b88-6ee9dbab79e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:47:49.497529   69940 system_pods.go:89] "csi-hostpath-resizer-0" [ca55e341-cd77-4896-92d1-1e316d2d7b95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:47:49.497594   69940 system_pods.go:89] "csi-hostpathplugin-ssw8n" [0fae5f54-937e-4802-b06f-184fcefc7ded] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:47:49.497638   69940 system_pods.go:89] "etcd-addons-331995" [ff9bbf64-85f3-4174-98da-3f3fff1de6e6] Running
	I0912 21:47:49.497679   69940 system_pods.go:89] "kube-apiserver-addons-331995" [441badbc-59a0-417e-8e96-21fce09febc8] Running
	I0912 21:47:49.497710   69940 system_pods.go:89] "kube-controller-manager-addons-331995" [32b0fae0-1656-4310-a0db-8ac8ebd06b24] Running
	I0912 21:47:49.497754   69940 system_pods.go:89] "kube-ingress-dns-minikube" [ff5b18d1-e63d-4b20-9301-4fe6a2d2b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:47:49.497784   69940 system_pods.go:89] "kube-proxy-9slnj" [fa2c6af2-4383-4d79-a6b6-8eee8fa882ef] Running
	I0912 21:47:49.497813   69940 system_pods.go:89] "kube-scheduler-addons-331995" [757291c4-4137-4c38-b3bf-c797be72627f] Running
	I0912 21:47:49.497917   69940 system_pods.go:89] "metrics-server-84c5f94fbc-qj8c7" [10575e3b-51e3-4a17-9911-8ed2245ed9c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:47:49.497977   69940 system_pods.go:89] "nvidia-device-plugin-daemonset-4sqcf" [2bb7e4c9-91fb-4914-ab76-7ffc5517e40d] Running
	I0912 21:47:49.498008   69940 system_pods.go:89] "registry-66c9cd494c-6jhvv" [07e442c7-a078-4cde-aa3c-fad57aac4c18] Running
	I0912 21:47:49.498039   69940 system_pods.go:89] "registry-proxy-rr7bm" [c0051f59-18dc-4684-b682-f4a992ea12a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:47:49.498248   69940 system_pods.go:89] "snapshot-controller-56fcc65765-mjfwh" [9163427d-b44f-4930-96ac-3890d3cf0f2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.498317   69940 system_pods.go:89] "snapshot-controller-56fcc65765-zbzzp" [b78b8a57-7f74-4cfa-a4e6-cd9904249298] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:47:49.498357   69940 system_pods.go:89] "storage-provisioner" [e82e018d-5b98-4b82-a817-a66b52cebf28] Running
	I0912 21:47:49.498402   69940 system_pods.go:89] "tiller-deploy-b48cc5f79-pxwc5" [c1f59ae3-6c11-42a9-b480-6fb541264acf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:47:49.498435   69940 system_pods.go:126] duration metric: took 232.695943ms to wait for k8s-apps to be running ...
	I0912 21:47:49.498496   69940 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:47:49.498640   69940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:47:49.529897   69940 system_svc.go:56] duration metric: took 31.369692ms WaitForService to wait for kubelet
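
The system_svc check boils down to the exit code of systemctl is-active, run on the node over SSH. A stdlib sketch of the essence of that check; it uses the standard "systemctl is-active --quiet kubelet" form rather than the exact argument string logged above, and assumes it runs on the node itself:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active --quiet exits 0 iff the unit is active, which is
	// all the kubelet service check needs to know.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
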
	I0912 21:47:49.530021   69940 kubeadm.go:582] duration metric: took 46.927304422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:47:49.530118   69940 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:47:49.666511   69940 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0912 21:47:49.666619   69940 node_conditions.go:123] node cpu capacity is 2
	I0912 21:47:49.666660   69940 node_conditions.go:105] duration metric: took 136.516491ms to run NodePressure ...
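
The NodePressure verification reads each node's capacity and pressure conditions from node status; the two capacity figures above (119475748Ki ephemeral storage, 2 CPUs) come straight from those fields. A client-go sketch that prints the same data, again assuming the on-node kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("%s=%s\n", c.Type, c.Status)
			}
		}
	}
}
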
	I0912 21:47:49.666701   69940 start.go:241] waiting for startup goroutines ...
	I0912 21:47:49.780615   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:49.837598   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:49.848634   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:50.281807   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:50.341857   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:50.342789   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:50.788610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:50.842673   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:50.853365   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:51.291597   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:51.340761   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:51.348012   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:51.780288   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:51.838813   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:51.839797   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:52.313936   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:52.332715   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:52.338834   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:52.815908   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.285926   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:53.286190   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.289150   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.505094   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.510458   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:53.787014   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:53.833651   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:53.840327   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.282694   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:54.349518   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:54.383360   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.781892   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:54.842943   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:54.858943   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.293466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:55.395172   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:55.402006   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.803905   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:55.868250   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:55.870919   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:56.281081   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:56.378063   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:56.380009   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:56.796621   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:56.835939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:56.857625   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.284270   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:57.357764   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:57.369570   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.792424   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:57.859754   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:57.863344   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.311602   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:58.374134   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.387188   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:58.792978   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:58.843290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:58.847855   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:59.316158   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:59.361827   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:59.364224   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:47:59.783856   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:47:59.845956   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:59.846292   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:00.281275   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:00.335770   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:00.341044   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:00.797268   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:00.911147   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:00.913622   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:01.296983   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:01.345232   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:01.347647   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:01.781722   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:01.839524   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:01.846260   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:02.283565   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:02.343628   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:02.351046   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:02.779305   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:02.835451   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:02.840581   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:03.279078   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:03.331903   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:03.338348   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:03.907243   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:03.908111   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:03.909201   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:04.281528   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:48:04.368668   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:04.369818   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:04.782038   69940 kapi.go:107] duration metric: took 30.00871618s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:48:04.837527   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:04.854216   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:05.345185   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:05.358722   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:05.846452   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:05.849874   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:06.445916   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:06.448160   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:06.837661   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:06.844340   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:07.346911   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:07.353814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:07.862252   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:07.876006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:08.338248   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:08.343179   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:08.837461   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:08.842322   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:09.343939   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:09.344406   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:09.836939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:09.842267   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:10.336085   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:10.341187   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:10.834379   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:10.842322   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:11.343715   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:11.345690   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:11.880794   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:11.931834   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:12.341960   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:12.362249   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:12.840366   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:12.849230   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.333543   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:13.340524   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.965430   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:13.965585   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.354004   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.376525   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:14.833545   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:14.843541   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:15.366288   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:15.368944   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:15.841170   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:15.845248   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:16.342332   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:16.349164   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:16.834596   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:16.847026   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:17.345073   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:17.355102   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:17.835110   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:17.839071   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:18.334023   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:18.337277   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:18.832952   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:18.839601   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.363250   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:19.382456   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.864159   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:19.867717   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.349792   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.356498   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:20.878162   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:20.880972   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:21.340948   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:21.354279   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:21.873508   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:21.921313   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:22.383708   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:22.386982   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:22.881680   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:22.895588   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:23.469177   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:23.469661   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:23.832928   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:23.839105   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:24.349533   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:24.352326   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:24.837638   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:24.847357   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.334093   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:25.339092   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.869685   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:25.873003   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.373739   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.376780   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:26.838194   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:26.846160   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:27.340485   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:27.344513   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:27.833289   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:27.856523   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:28.358759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:28.359862   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:28.864939   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:28.869108   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:29.339928   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:29.348006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:29.896299   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:29.908634   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:30.372334   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:30.377115   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:30.844342   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:30.876968   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:31.334505   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:31.342542   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:31.844204   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:31.847610   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.358253   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:32.361038   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.915862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:32.917511   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.384528   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:33.419347   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.846687   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:33.891659   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:34.408627   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:34.410577   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:34.848290   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:34.848949   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.337969   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:35.343374   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.853466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:35.853886   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.337954   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.347897   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:36.831720   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:36.838040   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.340281   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:37.347709   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.855496   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:37.860240   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.341181   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.349659   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:38.875524   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:38.902936   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:39.356499   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:39.459946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:39.871696   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:39.878903   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:40.409357   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:40.412029   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:40.863988   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:40.864466   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.396587   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:41.400377   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.897185   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:41.915623   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:42.335705   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:42.349070   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.308040   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.336963   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.516434   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:43.516495   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.888510   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:43.889323   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:44.346794   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:44.348904   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:44.890648   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:44.912026   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:45.337345   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:45.343562   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:45.834461   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:45.840709   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:46.340196   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:46.343223   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:46.837234   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:46.851857   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:47.338127   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:47.342891   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:47.985386   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:47.985859   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:48.332506   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:48.340282   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:48.863219   69940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:48:48.865849   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:49.348678   69940 kapi.go:107] duration metric: took 1m20.022596727s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:48:49.352744   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:49.847370   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:50.343745   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:50.844397   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:51.341527   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:51.885138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:52.370797   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:52.838083   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:53.344804   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:53.839805   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:54.365716   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:54.840204   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:55.345739   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:55.841982   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:56.346862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:56.846779   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:57.339284   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:57.840462   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:58.338320   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:58.840341   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:48:59.341618   69940 kapi.go:107] duration metric: took 1m22.51138833s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:49:02.681560   69940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:49:02.681596   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:03.180796   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:03.680304   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:04.187136   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:04.680145   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:05.183753   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:05.679814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:06.181708   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:06.680435   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:07.181294   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:07.681947   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:08.180875   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:08.682374   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:09.179711   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:09.679769   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:10.180767   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:10.679862   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:11.180603   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:11.680792   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:12.180962   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:12.681368   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:13.180440   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:13.681810   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:14.180802   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:14.679794   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:15.179600   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:15.680814   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:16.181215   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:16.681273   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:17.180156   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:17.680284   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:18.182639   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:18.680544   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:19.182444   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:19.680525   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:20.181735   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:20.680984   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:21.180490   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:21.680684   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:22.181806   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:22.681187   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:23.181579   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:23.680972   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:24.180945   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:24.680600   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:25.180256   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:25.681395   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:26.181785   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:26.680511   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:27.180628   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:27.681970   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:28.184760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:28.682773   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:29.180868   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:29.679939   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:30.180469   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:30.681287   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:31.181175   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:31.680027   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:32.180786   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:32.679772   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:33.181953   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:33.680175   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:34.182304   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:34.681356   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:35.180612   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:35.680760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:36.182405   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:36.680943   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:37.180667   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:37.681004   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:38.187513   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:38.682010   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:39.181521   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:39.681364   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:40.181138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:40.681040   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:41.179759   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:41.679997   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:42.181448   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:42.681181   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:43.182220   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:43.681089   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:44.184252   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:44.689166   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:45.186799   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:45.685242   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:46.181045   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:46.680807   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:47.180210   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:47.680829   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:48.180771   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:48.680530   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:49.180222   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:49.680014   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:50.181584   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:50.683729   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:51.181163   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:51.712333   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:52.181025   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:52.680155   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:53.180492   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:53.680716   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:54.180607   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:54.680830   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:55.180760   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:55.680589   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:56.180003   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:56.679918   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:57.181386   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:57.680913   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:58.187803   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:58.680782   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:59.182043   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:49:59.680570   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:00.181323   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:00.679811   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:01.179628   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:01.681395   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:02.190875   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:02.680916   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:03.180685   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:03.680704   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:04.186471   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:04.680946   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:05.180582   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:05.679485   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:06.182166   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:06.688006   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:07.181138   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:07.681345   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:08.182874   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:08.683536   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:09.187135   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:09.717830   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:10.181111   69940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:50:10.681221   69940 kapi.go:107] duration metric: took 2m30.507191094s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:50:10.684713   69940 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-331995 cluster.
	I0912 21:50:10.687552   69940 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:50:10.690283   69940 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:50:10.693168   69940 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, default-storageclass, volcano, metrics-server, helm-tiller, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0912 21:50:10.696159   69940 addons.go:510] duration metric: took 3m8.09271824s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher storage-provisioner default-storageclass volcano metrics-server helm-tiller inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0912 21:50:10.696264   69940 start.go:246] waiting for cluster config update ...
	I0912 21:50:10.696364   69940 start.go:255] writing updated cluster config ...
	I0912 21:50:10.697014   69940 ssh_runner.go:195] Run: rm -f paused
	I0912 21:50:11.234534   69940 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:50:11.237872   69940 out.go:177] * Done! kubectl is now configured to use "addons-331995" cluster and "default" namespace by default
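
The two actionable hints printed by out.go above (the gcp-auth-skip-secret label and re-running addons enable with --refresh) can be exercised directly. A minimal sketch against this profile; the pod name and image below are illustrative, not taken from this run:

    # opt a single pod out of credential injection by labeling it at creation time
    kubectl --context addons-331995 run demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600

    # re-mount credentials into pods that existed before the addon finished enabling
    out/minikube-linux-amd64 addons enable gcp-auth -p addons-331995 --refresh
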
	
	
	==> Docker <==
	Sep 12 22:02:28 addons-331995 dockerd[1164]: time="2024-09-12T22:02:28.767155560Z" level=info msg="ignoring event" container=7269db6cfe12343a4052572df6d32cc5d0152dda46483e2c58471619b4c40390 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:28 addons-331995 dockerd[1164]: time="2024-09-12T22:02:28.785150137Z" level=info msg="ignoring event" container=e457284d20950b112cf7b2e708697c30c5e8fc9e3328261f5e5ada68fca3d3d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:29 addons-331995 dockerd[1164]: time="2024-09-12T22:02:29.035308592Z" level=info msg="ignoring event" container=b1a811d4b1298cc494b84776c67faddc8e390d64cd5c784c7e2cf0789c93b05d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:29 addons-331995 dockerd[1164]: time="2024-09-12T22:02:29.221045975Z" level=info msg="ignoring event" container=cd5cf7c706c63e418af8c551f95d1fb7712900f7148066e334af56e2073fb295 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:29 addons-331995 dockerd[1164]: time="2024-09-12T22:02:29.318517581Z" level=info msg="ignoring event" container=a944326c09099f5942d2390bf032ec5cd2162092e9a910572fc72f219499d3a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:35 addons-331995 dockerd[1164]: time="2024-09-12T22:02:35.153840921Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 22:02:35 addons-331995 dockerd[1164]: time="2024-09-12T22:02:35.157035761Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 22:02:35 addons-331995 dockerd[1164]: time="2024-09-12T22:02:35.830695175Z" level=info msg="ignoring event" container=7e2523ea2f227dcb8b6ea9be62b3e09ec98364e865f4b8ac8e90be3f28a0dd3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:35 addons-331995 dockerd[1164]: time="2024-09-12T22:02:35.853766274Z" level=info msg="ignoring event" container=518419862f565bd6b41c32a3c24d3dc06b13d59cfecbb32baba3d053ad085b19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:36 addons-331995 dockerd[1164]: time="2024-09-12T22:02:36.114954739Z" level=info msg="ignoring event" container=279564e701505aa4068fe08c68c3af3f4061c3c0ffd683680e1bdae89832e6fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:36 addons-331995 dockerd[1164]: time="2024-09-12T22:02:36.147998509Z" level=info msg="ignoring event" container=ad6f30706accb582f6e4fa9ad72c18bb51b920dcd5b8e4c93ec5aab057cefccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:40 addons-331995 cri-dockerd[1419]: time="2024-09-12T22:02:40Z" level=error msg="error getting RW layer size for container ID '7e2523ea2f227dcb8b6ea9be62b3e09ec98364e865f4b8ac8e90be3f28a0dd3e': Error response from daemon: No such container: 7e2523ea2f227dcb8b6ea9be62b3e09ec98364e865f4b8ac8e90be3f28a0dd3e"
	Sep 12 22:02:40 addons-331995 cri-dockerd[1419]: time="2024-09-12T22:02:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7e2523ea2f227dcb8b6ea9be62b3e09ec98364e865f4b8ac8e90be3f28a0dd3e'"
	Sep 12 22:02:42 addons-331995 cri-dockerd[1419]: time="2024-09-12T22:02:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d6751a9f4095b29926d8355a958e3bf2b0331cdae84830022106ae0b8e36db2/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 12 22:02:44 addons-331995 cri-dockerd[1419]: time="2024-09-12T22:02:44Z" level=info msg="Stop pulling image docker.io/alpine/helm:2.16.3: Status: Downloaded newer image for alpine/helm:2.16.3"
	Sep 12 22:02:45 addons-331995 dockerd[1164]: time="2024-09-12T22:02:45.218224893Z" level=info msg="ignoring event" container=cd62e50edc9cae300859c9929530edec70562f82daf96cfe4353749dfb3dd563 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:45 addons-331995 dockerd[1164]: time="2024-09-12T22:02:45.251834846Z" level=warning msg="failed to close stdin: NotFound: task cd62e50edc9cae300859c9929530edec70562f82daf96cfe4353749dfb3dd563 not found: not found"
	Sep 12 22:02:46 addons-331995 dockerd[1164]: time="2024-09-12T22:02:46.780296981Z" level=info msg="ignoring event" container=4d6751a9f4095b29926d8355a958e3bf2b0331cdae84830022106ae0b8e36db2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:47 addons-331995 dockerd[1164]: time="2024-09-12T22:02:47.618331622Z" level=info msg="ignoring event" container=1c867a7319b3006dd7ecce88369b4f7a11582637ab5688f3627a4d71278198f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:47 addons-331995 dockerd[1164]: time="2024-09-12T22:02:47.864345662Z" level=info msg="ignoring event" container=6a92cecac9cb80c1bed39473be0c2d3d78d1dbdb1de515bbedaa46fd259a3f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:52 addons-331995 dockerd[1164]: time="2024-09-12T22:02:52.507701110Z" level=info msg="ignoring event" container=2d4aa333237a10f4ab1ab5eb65c014914912290b13dcc2d7d66e72a6daf91ece module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:53 addons-331995 dockerd[1164]: time="2024-09-12T22:02:53.576193460Z" level=info msg="ignoring event" container=2c66e67b210eb18db7f5eff8dccf1f45f8a91a74d60aea958ae967cab4dcffa9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:54 addons-331995 dockerd[1164]: time="2024-09-12T22:02:54.079031184Z" level=info msg="ignoring event" container=15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:54 addons-331995 dockerd[1164]: time="2024-09-12T22:02:54.187203890Z" level=info msg="ignoring event" container=c0fe77c8ba99b1f44688469af0b3976e4fc745173051aca7c322ed9ec0fffa4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 22:02:54 addons-331995 dockerd[1164]: time="2024-09-12T22:02:54.576730755Z" level=info msg="ignoring event" container=de6b8c54852a35cb4d35a0e058912001cc14d700bac9b58061e1184553c4b788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
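
Note the two 22:02:35 entries above: an anonymous pull of gcr.io/k8s-minikube/busybox from gcr.io is rejected with "unauthorized: authentication failed". This is most likely the image pull for the default/busybox pod listed in the node description below, so the same failure should be visible from the Kubernetes side as Failed/BackOff events; a quick check, assuming kubectl is pointed at this profile:

    # pull failures surface as events in the pod description
    kubectl --context addons-331995 -n default describe pod busybox
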
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	cd62e50edc9ca       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          12 seconds ago      Exited              helm-test                  0                   4d6751a9f4095       helm-test
	a733f43413cb0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            3 minutes ago       Exited              gadget                     7                   f942917293ca1       gadget-qxkpk
	cd2d86123e208       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 12 minutes ago      Running             gcp-auth                   0                   ad8b4b78fc3c3       gcp-auth-89d5ffd79-zc25n
	d7956ca6ad484       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             14 minutes ago      Running             controller                 0                   1baabe6295ccb       ingress-nginx-controller-bc57996ff-ghbdz
	f69d1097d49de       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                         14 minutes ago      Running             admission                  0                   cbf92391595a3       volcano-admission-77d7d48b68-86lvv
	36ea06f7c3982       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                               14 minutes ago      Running             volcano-scheduler          0                   499b7c798ffe6       volcano-scheduler-576bc46687-x2rvr
	4c84e0633820e       ce263a8653f9c                                                                                                                14 minutes ago      Exited              patch                      1                   2ddb9c21b6388       ingress-nginx-admission-patch-vjgh7
	2b07f863c83cf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   14 minutes ago      Exited              create                     0                   b46d1a8ec1c88       ingress-nginx-admission-create-62fw7
	3eab7e4cc8a0b       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                      14 minutes ago      Running             volcano-controllers        0                   9d35c921c6266       volcano-controllers-56675bb4d5-2xcbp
	24994544e183d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        14 minutes ago      Running             yakd                       0                   e086acc613350       yakd-dashboard-67d98fc6b-xqz4j
	93726d79c636f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       14 minutes ago      Running             local-path-provisioner     0                   cddc341984351       local-path-provisioner-86d989889c-hn7cz
	872989ed73e1e       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        15 minutes ago      Running             metrics-server             0                   068137373fe7b       metrics-server-84c5f94fbc-qj8c7
	30fad740af987       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             15 minutes ago      Running             minikube-ingress-dns       0                   238cdd11405a9       kube-ingress-dns-minikube
	b71da04555c07       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               15 minutes ago      Running             cloud-spanner-emulator     0                   450b1183c12b9       cloud-spanner-emulator-769b77f747-2cg5r
	046f859adeec7       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     15 minutes ago      Running             nvidia-device-plugin-ctr   0                   9570cbdbb7023       nvidia-device-plugin-daemonset-4sqcf
	f30d0ccccc620       6e38f40d628db                                                                                                                15 minutes ago      Running             storage-provisioner        0                   ecc35d17bf51e       storage-provisioner
	868a9e54f79bc       c69fa2e9cbf5f                                                                                                                15 minutes ago      Running             coredns                    0                   ff0c0e5a01423       coredns-7c65d6cfc9-6p998
	445dc44b267e6       c69fa2e9cbf5f                                                                                                                15 minutes ago      Running             coredns                    0                   5e8404ae6fe81       coredns-7c65d6cfc9-vhwzq
	7af155869329b       60c005f310ff3                                                                                                                15 minutes ago      Running             kube-proxy                 0                   d34ada0abb6ee       kube-proxy-9slnj
	b880a4debed4b       6bab7719df100                                                                                                                16 minutes ago      Running             kube-apiserver             0                   5abe04d9f008b       kube-apiserver-addons-331995
	f9f9ebe55863b       2e96e5913fc06                                                                                                                16 minutes ago      Running             etcd                       0                   1a319d26cdb5b       etcd-addons-331995
	29fcb0173534b       175ffd71cce3d                                                                                                                16 minutes ago      Running             kube-controller-manager    0                   c1ddcceb5e23b       kube-controller-manager-addons-331995
	86c19ec99efff       9aa1fad941575                                                                                                                16 minutes ago      Running             kube-scheduler             0                   6ff136ae32ba3       kube-scheduler-addons-331995
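
One row above stands out: the gadget container is Exited with attempt count 7, i.e. a restart loop rather than a clean run, while everything else is Running or a completed one-shot job. A minimal triage sketch, assuming the pod is still present:

    # logs from the last failed attempt, then the kubelet's view of the restarts
    kubectl --context addons-331995 -n gadget logs gadget-qxkpk --previous
    kubectl --context addons-331995 -n gadget describe pod gadget-qxkpk
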
	
	
	==> controller_ingress [d7956ca6ad48] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0912 21:48:48.797338       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0912 21:48:48.797927       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0912 21:48:48.816437       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0912 21:48:49.543865       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0912 21:48:49.617987       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0912 21:48:49.655358       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0912 21:48:49.776800       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6abed3cf-112e-49b8-bc6e-393ac8803cf8", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0912 21:48:49.782660       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"16fc4b97-9f7e-4b9b-ad0c-aae8bfb964d7", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0912 21:48:49.782963       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"aa7fa93d-6d21-4683-998d-ae89c6b2aa34", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0912 21:48:50.907379       7 nginx.go:317] "Starting NGINX process"
	I0912 21:48:50.926313       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0912 21:48:50.939770       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0912 21:48:50.943487       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:48:50.970788       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0912 21:48:50.971155       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-ghbdz"
	I0912 21:48:51.049656       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-ghbdz" node="addons-331995"
	I0912 21:48:51.087220       7 controller.go:213] "Backend successfully reloaded"
	I0912 21:48:51.087501       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0912 21:48:51.087926       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-ghbdz", UID:"b658f2d0-8e52-4414-bf8c-81bcbd9a15bd", APIVersion:"v1", ResourceVersion:"1256", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
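
The lines above record a normal controller startup: the validation webhook comes up on :8443, the ingress-nginx/ingress-nginx-leader lease is acquired, and one backend reload follows the initial configuration sync. The lease is an ordinary coordination.k8s.io object, so current leadership can be confirmed directly; a sketch:

    kubectl --context addons-331995 -n ingress-nginx get lease ingress-nginx-leader -o yaml
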
	
	
	==> coredns [445dc44b267e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[569963566]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 21:47:09.877) (total time: 30028ms):
	Trace[569963566]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30018ms (21:47:39.895)
	Trace[569963566]: [30.02820962s] [30.02820962s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[961821547]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (12-Sep-2024 21:47:09.881) (total time: 30024ms):
	Trace[961821547]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30011ms (21:47:39.892)
	Trace[961821547]: [30.02467921s] [30.02467921s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:36998 - 22539 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,rd,ra 206 0.011019646s
	[INFO] 10.244.0.8:36998 - 26636 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,rd,ra 206 0.020344759s
	[INFO] 10.244.0.8:34022 - 38422 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.003910944s
	[INFO] 10.244.0.8:34022 - 48687 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.004250639s
	[INFO] 10.244.0.8:47605 - 39720 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000762771s
	[INFO] 10.244.0.8:47605 - 56878 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000734464s
	[INFO] 10.244.0.8:44780 - 46343 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000385778s
	[INFO] 10.244.0.8:44780 - 38669 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001049043s
	[INFO] 10.244.0.8:40664 - 45404 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000126875s
	[INFO] 10.244.0.8:40664 - 357 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000418748s
	[INFO] 10.244.0.26:49133 - 39508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003399423s
	[INFO] 10.244.0.26:38609 - 27818 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000231503s
	[INFO] 10.244.0.26:47855 - 40545 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157429s
	[INFO] 10.244.0.26:59443 - 65496 "A IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.00591248s
	[INFO] 10.244.0.26:41307 - 61273 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.004982528s
	[INFO] 10.244.0.26:33650 - 48868 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004500354s
	
	
	==> coredns [868a9e54f79b] <==
	[INFO] 10.244.0.8:42231 - 34404 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194802s
	[INFO] 10.244.0.8:59617 - 56571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151976s
	[INFO] 10.244.0.8:59617 - 34544 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106499s
	[INFO] 10.244.0.8:48323 - 25306 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171133s
	[INFO] 10.244.0.8:48323 - 63966 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122441s
	[INFO] 10.244.0.8:42010 - 50345 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 177 0.004193851s
	[INFO] 10.244.0.8:42010 - 11431 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 177 0.006814802s
	[INFO] 10.244.0.8:54934 - 64830 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000245648s
	[INFO] 10.244.0.8:54934 - 11268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191038s
	[INFO] 10.244.0.8:34147 - 59879 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000265517s
	[INFO] 10.244.0.8:34147 - 4860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000308605s
	[INFO] 10.244.0.8:41581 - 27020 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.009223986s
	[INFO] 10.244.0.8:41581 - 25227 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,rd,ra 193 0.009824796s
	[INFO] 10.244.0.8:40590 - 10991 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000137334s
	[INFO] 10.244.0.8:40590 - 18666 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00016705s
	[INFO] 10.244.0.8:36099 - 12503 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221759s
	[INFO] 10.244.0.8:36099 - 13011 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000436864s
	[INFO] 10.244.0.26:42816 - 4561 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000404391s
	[INFO] 10.244.0.26:34101 - 17948 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201728s
	[INFO] 10.244.0.26:38541 - 13689 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00020108s
	[INFO] 10.244.0.26:54684 - 59487 "AAAA IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.005249268s
	[INFO] 10.244.0.26:60239 - 56905 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003733364s
	[INFO] 10.244.0.26:40520 - 60082 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005744176s
	[INFO] 10.244.0.26:36108 - 1839 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003018103s
	[INFO] 10.244.0.26:52133 - 17789 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003163565s
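
Two patterns dominate both coredns logs. The ~30s i/o timeouts to 10.96.0.1:443 appear only at startup, before the apiserver Service was reachable from the pod network, and do not recur. The long NXDOMAIN chains afterwards are the expected fan-out from the ndots:5 search path written into each pod's resolv.conf (see the cri-dockerd re-write entry in the Docker section above), not lookup failures. A sketch of the corresponding checks, assuming the standard k8s-app=kube-dns label on the coredns pods:

    kubectl --context addons-331995 -n kube-system get svc kube-dns
    kubectl --context addons-331995 -n kube-system logs -l k8s-app=kube-dns --tail=20
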
	
	
	==> describe nodes <==
	Name:               addons-331995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-331995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-331995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_46_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-331995
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:46:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-331995
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:02:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:02:08 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:02:08 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:02:08 +0000   Thu, 12 Sep 2024 21:46:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:02:08 +0000   Thu, 12 Sep 2024 21:46:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-331995
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	System Info:
	  Machine ID:                 af769b9e892649a9a66756768ebde624
	  System UUID:                23a3c713-b4af-4c64-a638-7904dc1f2582
	  Boot ID:                    8d817c15-e3fc-48f0-8b3e-6ea4899766ef
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  default                     cloud-spanner-emulator-769b77f747-2cg5r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gadget                      gadget-qxkpk                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gcp-auth                    gcp-auth-89d5ffd79-zc25n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-ghbdz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-6p998                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-7c65d6cfc9-vhwzq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-addons-331995                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-331995                250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-331995       200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-9slnj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-331995                100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-qj8c7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 nvidia-device-plugin-daemonset-4sqcf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-hn7cz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-admission-77d7d48b68-86lvv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-controllers-56675bb4d5-2xcbp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-scheduler-576bc46687-x2rvr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xqz4j              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (8%)   596Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-331995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-331995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-331995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node addons-331995 event: Registered Node addons-331995 in Controller
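
Note: the node advertises 2000m of allocatable CPU, of which 1050m (52%) is already requested by the 21 system and addon pods above, leaving roughly 950m of headroom; any pod requesting a full CPU or more will therefore stay Pending as Unschedulable on this node. A quick recheck of the headroom against this profile:

  kubectl --context addons-331995 describe node addons-331995 | grep -A 7 'Allocated resources'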
	
	
	==> dmesg <==
	[  +3.148428] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 4a e9 56 e6 7d b7 08 06
	[  +7.499062] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 77 14 78 1e 58 08 06
	[  +7.560044] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 39 6b 69 f9 1a 08 06
	[  +4.103333] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 86 c0 55 13 a6 d7 08 06
	[  +0.158935] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 36 9d 3e c3 22 08 06
	[  +0.112368] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 55 e2 1d 59 35 08 06
	[Sep12 21:49] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 88 11 36 c1 00 08 06
	[  +0.074484] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ee f8 ca 58 92 32 08 06
	[Sep12 21:50] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 23 3b c0 b6 ea 08 06
	[  +0.001215] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa 0c 72 97 a0 30 08 06
	[  +0.000731] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be b1 f1 17 08 40 08 06
	[Sep12 21:53] hrtimer: interrupt took 1181010 ns
	[Sep12 22:02] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 66 0d 5b 04 25 08 06
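
Note: the repeated "martian source" messages are a known, generally benign artifact of the Docker driver: pod traffic from 10.244.0.0/24 arrives on eth0, where reverse-path filtering treats the source address as unexpected for that interface and logs it. Whether martian logging is enabled can be checked with:

  minikube -p addons-331995 ssh -- sysctl net.ipv4.conf.all.log_martians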
	
	
	==> etcd [f9f9ebe55863] <==
	{"level":"info","ts":"2024-09-12T21:48:43.497686Z","caller":"traceutil/trace.go:171","msg":"trace[1654477385] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1231; }","duration":"145.191361ms","start":"2024-09-12T21:48:43.352480Z","end":"2024-09-12T21:48:43.497672Z","steps":["trace[1654477385] 'agreement among raft nodes before linearized reading'  (duration: 140.262687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:43.492918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.904378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f49d6fa8e6529d\" ","response":"range_response_count:1 size:913"}
	{"level":"info","ts":"2024-09-12T21:48:43.498107Z","caller":"traceutil/trace.go:171","msg":"trace[1941988053] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch.17f49d6fa8e6529d; range_end:; response_count:1; response_revision:1231; }","duration":"137.088353ms","start":"2024-09-12T21:48:43.361003Z","end":"2024-09-12T21:48:43.498091Z","steps":["trace[1941988053] 'agreement among raft nodes before linearized reading'  (duration: 131.839143ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:44.828372Z","caller":"traceutil/trace.go:171","msg":"trace[1255160325] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"130.254122ms","start":"2024-09-12T21:48:44.698094Z","end":"2024-09-12T21:48:44.828348Z","steps":["trace[1255160325] 'process raft request'  (duration: 130.10318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:47.973672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.626207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:47.973782Z","caller":"traceutil/trace.go:171","msg":"trace[1103384014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"144.712603ms","start":"2024-09-12T21:48:47.829016Z","end":"2024-09-12T21:48:47.973729Z","steps":["trace[1103384014] 'range keys from in-memory index tree'  (duration: 144.526239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:47.973962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.814097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:47.973990Z","caller":"traceutil/trace.go:171","msg":"trace[1223651583] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1249; }","duration":"140.844888ms","start":"2024-09-12T21:48:47.833137Z","end":"2024-09-12T21:48:47.973981Z","steps":["trace[1223651583] 'range keys from in-memory index tree'  (duration: 140.742495ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:48:48.308663Z","caller":"traceutil/trace.go:171","msg":"trace[860059594] linearizableReadLoop","detail":"{readStateIndex:1286; appliedIndex:1285; }","duration":"129.00134ms","start":"2024-09-12T21:48:48.179636Z","end":"2024-09-12T21:48:48.308637Z","steps":["trace[860059594] 'read index received'  (duration: 128.70953ms)","trace[860059594] 'applied index is now lower than readState.Index'  (duration: 290.355µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:48:48.310468Z","caller":"traceutil/trace.go:171","msg":"trace[473668203] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"318.962413ms","start":"2024-09-12T21:48:47.991450Z","end":"2024-09-12T21:48:48.310413Z","steps":["trace[473668203] 'process raft request'  (duration: 316.95513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:48.310664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.968423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T21:48:48.311892Z","caller":"traceutil/trace.go:171","msg":"trace[46603671] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"132.243511ms","start":"2024-09-12T21:48:48.179629Z","end":"2024-09-12T21:48:48.311872Z","steps":["trace[46603671] 'agreement among raft nodes before linearized reading'  (duration: 130.934072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:48:48.313904Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-12T21:48:47.991426Z","time spent":"320.358775ms","remote":"127.0.0.1:44968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3358,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" mod_revision:882 > success:<request_put:<key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" value_size:3267 >> failure:<request_range:<key:\"/registry/validatingwebhookconfigurations/volcano-admission-service-queues-validate\" > >"}
	{"level":"info","ts":"2024-09-12T21:56:52.753106Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1687}
	{"level":"info","ts":"2024-09-12T21:56:52.836869Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1687,"took":"82.714391ms","hash":1368629540,"current-db-size-bytes":8531968,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4395008,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-12T21:56:52.836930Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1368629540,"revision":1687,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-12T22:01:50.712694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.05773ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T22:01:50.712780Z","caller":"traceutil/trace.go:171","msg":"trace[1543258862] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2567; }","duration":"232.176923ms","start":"2024-09-12T22:01:50.480585Z","end":"2024-09-12T22:01:50.712762Z","steps":["trace[1543258862] 'range keys from in-memory index tree'  (duration: 232.03712ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T22:01:50.712998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.753849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-12T22:01:50.713030Z","caller":"traceutil/trace.go:171","msg":"trace[1025847475] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:2567; }","duration":"143.796644ms","start":"2024-09-12T22:01:50.569223Z","end":"2024-09-12T22:01:50.713020Z","steps":["trace[1025847475] 'count revisions from in-memory index tree'  (duration: 143.657905ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:01:50.978861Z","caller":"traceutil/trace.go:171","msg":"trace[902396706] transaction","detail":"{read_only:false; response_revision:2568; number_of_response:1; }","duration":"254.115027ms","start":"2024-09-12T22:01:50.724723Z","end":"2024-09-12T22:01:50.978838Z","steps":["trace[902396706] 'process raft request'  (duration: 253.669265ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:01:51.169455Z","caller":"traceutil/trace.go:171","msg":"trace[994144618] transaction","detail":"{read_only:false; response_revision:2569; number_of_response:1; }","duration":"165.22646ms","start":"2024-09-12T22:01:51.004204Z","end":"2024-09-12T22:01:51.169430Z","steps":["trace[994144618] 'process raft request'  (duration: 159.136737ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T22:01:52.794561Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2142}
	{"level":"info","ts":"2024-09-12T22:01:52.875586Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2142,"took":"79.794038ms","hash":2790691660,"current-db-size-bytes":8531968,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":3260416,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-12T22:01:52.875656Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2790691660,"revision":2142,"compact-revision":1687}
	
	
	==> gcp-auth [cd2d86123e20] <==
	2024/09/12 21:50:09 GCP Auth Webhook started!
	2024/09/12 21:50:29 Ready to marshal response ...
	2024/09/12 21:50:29 Ready to write response ...
	2024/09/12 21:50:30 Ready to marshal response ...
	2024/09/12 21:50:30 Ready to write response ...
	2024/09/12 21:53:35 Ready to marshal response ...
	2024/09/12 21:53:35 Ready to write response ...
	2024/09/12 21:53:36 Ready to marshal response ...
	2024/09/12 21:53:36 Ready to write response ...
	2024/09/12 21:53:36 Ready to marshal response ...
	2024/09/12 21:53:36 Ready to write response ...
	2024/09/12 22:01:42 Ready to marshal response ...
	2024/09/12 22:01:42 Ready to write response ...
	2024/09/12 22:01:52 Ready to marshal response ...
	2024/09/12 22:01:52 Ready to write response ...
	2024/09/12 22:02:18 Ready to marshal response ...
	2024/09/12 22:02:18 Ready to write response ...
	2024/09/12 22:02:42 Ready to marshal response ...
	2024/09/12 22:02:42 Ready to write response ...
	
	
	==> kernel <==
	 22:02:57 up  1:01,  0 users,  load average: 2.57, 1.59, 1.70
	Linux addons-331995 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [b880a4debed4] <==
	W0912 21:48:46.696285       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:48:47.753383       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.144.37:443: connect: connection refused
	W0912 21:49:02.281916       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:02.282013       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:49:43.366734       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:43.367200       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	W0912 21:49:43.367042       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.53.96:443: connect: connection refused
	E0912 21:49:43.367263       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.53.96:443: connect: connection refused" logger="UnhandledError"
	I0912 21:50:29.757823       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0912 21:50:29.795075       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0912 22:01:57.855595       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0912 22:02:35.293268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 22:02:35.293603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 22:02:35.327812       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 22:02:35.328121       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 22:02:35.407102       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 22:02:35.407508       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 22:02:35.498964       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 22:02:35.499045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 22:02:35.599469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 22:02:35.599531       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 22:02:36.500432       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 22:02:36.600261       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0912 22:02:36.614994       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0912 22:02:45.186293       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.31:51030: read: connection reset by peer
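
Note: the two webhook failures above differ in effect: mutatequeue.volcano.sh fails closed, so requests are rejected while volcano-admission-service is unreachable, whereas gcp-auth-mutate.k8s.io fails open and requests proceed unmutated. The behavior follows each webhook's failurePolicy, which can be listed with:

  kubectl --context addons-331995 get mutatingwebhookconfigurations \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'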
	
	
	==> kube-controller-manager [29fcb0173534] <==
	W0912 22:02:37.868927       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:37.868990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:37.982577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:37.982642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:38.995957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:38.996013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:40.161616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:40.161674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:40.202035       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:40.202109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:43.895756       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:43.896102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:45.254877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:45.254943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:46.003502       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:46.003567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 22:02:47.517120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="11.131µs"
	I0912 22:02:53.476115       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.254µs"
	W0912 22:02:54.362150       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:54.387222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 22:02:54.745095       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:54.745246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 22:02:55.533517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="22.655µs"
	W0912 22:02:56.299177       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 22:02:56.299242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [7af155869329] <==
	I0912 21:47:13.680709       1 server_linux.go:66] "Using iptables proxy"
	I0912 21:47:15.421561       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0912 21:47:15.425219       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:47:15.916170       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:47:15.916845       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:47:15.929120       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:47:15.934176       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:47:15.934221       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:47:16.042446       1 config.go:199] "Starting service config controller"
	I0912 21:47:16.044225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:47:16.048726       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:47:16.049131       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:47:16.071330       1 config.go:328] "Starting node config controller"
	I0912 21:47:16.075347       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:47:16.237394       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:47:16.237740       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:47:16.277926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [86c19ec99eff] <==
	W0912 21:46:55.413078       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:46:55.414590       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:46:56.256437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.256500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.327532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:46:56.327695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.374523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:46:56.374892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.436424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:46:56.436567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.495087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:46:56.497161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.546386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.546767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.659601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:46:56.661326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.661228       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:46:56.662207       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:46:56.674254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:46:56.674912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.713837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:46:56.714149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:46:56.738998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:46:56.739498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:46:59.566151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
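
Note: the "forbidden" list/watch errors are confined to bootstrap: the scheduler starts before the apiserver has published the RBAC bindings it needs, and the errors stop once caches sync at 21:46:59. The binding that resolves them can be inspected with:

  kubectl --context addons-331995 get clusterrolebinding system:kube-scheduler -o wide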
	
	
	==> kubelet <==
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.726364    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqkvz\" (UniqueName: \"kubernetes.io/projected/7657348e-7d6c-4ba6-891a-4757f6f75c42-kube-api-access-mqkvz\") pod \"7657348e-7d6c-4ba6-891a-4757f6f75c42\" (UID: \"7657348e-7d6c-4ba6-891a-4757f6f75c42\") "
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.726463    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7657348e-7d6c-4ba6-891a-4757f6f75c42-gcp-creds\") pod \"7657348e-7d6c-4ba6-891a-4757f6f75c42\" (UID: \"7657348e-7d6c-4ba6-891a-4757f6f75c42\") "
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.726596    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7657348e-7d6c-4ba6-891a-4757f6f75c42-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7657348e-7d6c-4ba6-891a-4757f6f75c42" (UID: "7657348e-7d6c-4ba6-891a-4757f6f75c42"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.733251    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7657348e-7d6c-4ba6-891a-4757f6f75c42-kube-api-access-mqkvz" (OuterVolumeSpecName: "kube-api-access-mqkvz") pod "7657348e-7d6c-4ba6-891a-4757f6f75c42" (UID: "7657348e-7d6c-4ba6-891a-4757f6f75c42"). InnerVolumeSpecName "kube-api-access-mqkvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.826911    2185 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7657348e-7d6c-4ba6-891a-4757f6f75c42-gcp-creds\") on node \"addons-331995\" DevicePath \"\""
	Sep 12 22:02:52 addons-331995 kubelet[2185]: I0912 22:02:52.826964    2185 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mqkvz\" (UniqueName: \"kubernetes.io/projected/7657348e-7d6c-4ba6-891a-4757f6f75c42-kube-api-access-mqkvz\") on node \"addons-331995\" DevicePath \"\""
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.142622    2185 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7657348e-7d6c-4ba6-891a-4757f6f75c42" path="/var/lib/kubelet/pods/7657348e-7d6c-4ba6-891a-4757f6f75c42/volumes"
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.551768    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsp2s\" (UniqueName: \"kubernetes.io/projected/07e442c7-a078-4cde-aa3c-fad57aac4c18-kube-api-access-vsp2s\") pod \"07e442c7-a078-4cde-aa3c-fad57aac4c18\" (UID: \"07e442c7-a078-4cde-aa3c-fad57aac4c18\") "
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.569667    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e442c7-a078-4cde-aa3c-fad57aac4c18-kube-api-access-vsp2s" (OuterVolumeSpecName: "kube-api-access-vsp2s") pod "07e442c7-a078-4cde-aa3c-fad57aac4c18" (UID: "07e442c7-a078-4cde-aa3c-fad57aac4c18"). InnerVolumeSpecName "kube-api-access-vsp2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.653777    2185 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vsp2s\" (UniqueName: \"kubernetes.io/projected/07e442c7-a078-4cde-aa3c-fad57aac4c18-kube-api-access-vsp2s\") on node \"addons-331995\" DevicePath \"\""
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.856013    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmp4t\" (UniqueName: \"kubernetes.io/projected/c0051f59-18dc-4684-b682-f4a992ea12a2-kube-api-access-vmp4t\") pod \"c0051f59-18dc-4684-b682-f4a992ea12a2\" (UID: \"c0051f59-18dc-4684-b682-f4a992ea12a2\") "
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.860302    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0051f59-18dc-4684-b682-f4a992ea12a2-kube-api-access-vmp4t" (OuterVolumeSpecName: "kube-api-access-vmp4t") pod "c0051f59-18dc-4684-b682-f4a992ea12a2" (UID: "c0051f59-18dc-4684-b682-f4a992ea12a2"). InnerVolumeSpecName "kube-api-access-vmp4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 22:02:54 addons-331995 kubelet[2185]: I0912 22:02:54.957385    2185 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vmp4t\" (UniqueName: \"kubernetes.io/projected/c0051f59-18dc-4684-b682-f4a992ea12a2-kube-api-access-vmp4t\") on node \"addons-331995\" DevicePath \"\""
	Sep 12 22:02:55 addons-331995 kubelet[2185]: I0912 22:02:55.143574    2185 scope.go:117] "RemoveContainer" containerID="15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2"
	Sep 12 22:02:55 addons-331995 kubelet[2185]: I0912 22:02:55.218824    2185 scope.go:117] "RemoveContainer" containerID="15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2"
	Sep 12 22:02:55 addons-331995 kubelet[2185]: E0912 22:02:55.225429    2185 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2" containerID="15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2"
	Sep 12 22:02:55 addons-331995 kubelet[2185]: I0912 22:02:55.225487    2185 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2"} err="failed to get container status \"15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 15a61fdf28b5ad3b248f7a9f36b3668a36763b6e2c1dea9241d706a88e5e10e2"
	Sep 12 22:02:55 addons-331995 kubelet[2185]: I0912 22:02:55.225533    2185 scope.go:117] "RemoveContainer" containerID="2c66e67b210eb18db7f5eff8dccf1f45f8a91a74d60aea958ae967cab4dcffa9"
	Sep 12 22:02:56 addons-331995 kubelet[2185]: I0912 22:02:56.114630    2185 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e442c7-a078-4cde-aa3c-fad57aac4c18" path="/var/lib/kubelet/pods/07e442c7-a078-4cde-aa3c-fad57aac4c18/volumes"
	Sep 12 22:02:56 addons-331995 kubelet[2185]: I0912 22:02:56.115827    2185 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0051f59-18dc-4684-b682-f4a992ea12a2" path="/var/lib/kubelet/pods/c0051f59-18dc-4684-b682-f4a992ea12a2/volumes"
	Sep 12 22:02:57 addons-331995 kubelet[2185]: I0912 22:02:57.280433    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzpt\" (UniqueName: \"kubernetes.io/projected/10575e3b-51e3-4a17-9911-8ed2245ed9c6-kube-api-access-xkzpt\") pod \"10575e3b-51e3-4a17-9911-8ed2245ed9c6\" (UID: \"10575e3b-51e3-4a17-9911-8ed2245ed9c6\") "
	Sep 12 22:02:57 addons-331995 kubelet[2185]: I0912 22:02:57.280545    2185 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/10575e3b-51e3-4a17-9911-8ed2245ed9c6-tmp-dir\") pod \"10575e3b-51e3-4a17-9911-8ed2245ed9c6\" (UID: \"10575e3b-51e3-4a17-9911-8ed2245ed9c6\") "
	Sep 12 22:02:57 addons-331995 kubelet[2185]: I0912 22:02:57.281481    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10575e3b-51e3-4a17-9911-8ed2245ed9c6-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "10575e3b-51e3-4a17-9911-8ed2245ed9c6" (UID: "10575e3b-51e3-4a17-9911-8ed2245ed9c6"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 12 22:02:57 addons-331995 kubelet[2185]: I0912 22:02:57.292718    2185 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10575e3b-51e3-4a17-9911-8ed2245ed9c6-kube-api-access-xkzpt" (OuterVolumeSpecName: "kube-api-access-xkzpt") pod "10575e3b-51e3-4a17-9911-8ed2245ed9c6" (UID: "10575e3b-51e3-4a17-9911-8ed2245ed9c6"). InnerVolumeSpecName "kube-api-access-xkzpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 22:02:57 addons-331995 kubelet[2185]: I0912 22:02:57.325015    2185 scope.go:117] "RemoveContainer" containerID="872989ed73e1ed61abcf68b301a975b3ce6d433a4b642221b9059b44b8aead1f"
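
Note: the "No such container" errors from RemoveContainer are a benign race; the kubelet issues a second delete for a container ID the Docker runtime has already removed (the same ID appears in back-to-back RemoveContainer calls at 22:02:55). A sketch to confirm the container is in fact gone:

  minikube -p addons-331995 ssh -- docker ps -a --filter id=15a61fdf28b5 --format '{{.ID}} {{.Status}}'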
	
	
	==> storage-provisioner [f30d0ccccc62] <==
	I0912 21:47:21.810825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:47:22.274647       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:47:22.275065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:47:22.690583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:47:22.691995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-331995_452362f9-16f2-495f-ae00-4487175040a7!
	I0912 21:47:22.763502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ded5eb6c-7290-49d3-bbc7-91676b62b5b7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-331995_452362f9-16f2-495f-ae00-4487175040a7 became leader
	I0912 21:47:23.306677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-331995_452362f9-16f2-495f-ae00-4487175040a7!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-331995 -n addons-331995
helpers_test.go:261: (dbg) Run:  kubectl --context addons-331995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-331995 describe pod busybox ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-331995 describe pod busybox ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0: exit status 1 (133.967709ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-331995/192.168.49.2
	Start Time:       Thu, 12 Sep 2024 21:53:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-27c8k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-27c8k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m22s                   default-scheduler  Successfully assigned default/busybox to addons-331995
	  Normal   Pulling    7m53s (x4 over 9m21s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m21s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m21s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m38s (x6 over 9m21s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x21 over 9m21s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-62fw7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vjgh7" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-331995 describe pod busybox ingress-nginx-admission-create-62fw7 ingress-nginx-admission-patch-vjgh7 test-job-nginx-0: exit status 1
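
Note: busybox never starts because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unauthorized: authentication failed", even though the image is public. Given the placeholder credentials injected by the gcp-auth addon (PROJECT_ID=this_is_fake above), broken registry auth on the node is a plausible suspect; a sketch to reproduce the pull outside the kubelet:

  minikube -p addons-331995 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
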
--- FAIL: TestAddons/parallel/Registry (77.86s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	[identical poll repeated 12 more times]
2024/09/12 22:09:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	[identical poll repeated 3 more times]
E0912 22:10:11.513188   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	[identical poll repeated 9 more times]
functional_test_tunnel_test.go:234: (dbg) Non-zero exit: kubectl --context functional-168025 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: context deadline exceeded (1.34µs)
functional_test_tunnel_test.go:245: nginx-svc svc.status.loadBalancer.ingress never got an IP: context deadline exceeded
functional_test_tunnel_test.go:246: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc
functional_test_tunnel_test.go:250: failed to kubectl get svc nginx-svc:

-- stdout --
	NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.97.78.146   <pending>     80:30779/TCP   3m10s
-- /stdout --
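The EXTERNAL-IP never left <pending> inside the 3m0s window, so every jsonpath poll above returned an empty string. A minimal sketch of the wait performed at functional_test_tunnel_test.go:234, assuming a "minikube tunnel" process is running in another session to promote the LoadBalancer:

	until kubectl --context functional-168025 get svc nginx-svc \
	    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -q .; do
	  sleep 5   # keep polling until an ingress IP is reported
	done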
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.12s)

TestFunctional/parallel/MountCmd/any-port (14.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdany-port1526128711/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726178927386445112" to /tmp/TestFunctionalparallelMountCmdany-port1526128711/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726178927386445112" to /tmp/TestFunctionalparallelMountCmdany-port1526128711/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726178927386445112" to /tmp/TestFunctionalparallelMountCmdany-port1526128711/001/test-1726178927386445112
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (641.737058ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
	[the same check was retried six more times (376ms-410ms each), failing identically with exit status 1]
functional_test_mount_test.go:125: /mount-9p did not appear within 13.38505958s: exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (390.78563ms)

-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-168025 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "sudo umount -f /mount-9p": exit status 1 (422.189748ms)

-- stdout --
	umount: /mount-9p: no mount point specified.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:92: "out/minikube-linux-amd64 -p functional-168025 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdany-port1526128711/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdany-port1526128711/001:/mount-9p --alsologtostderr -v=1] stdout:

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdany-port1526128711/001:/mount-9p --alsologtostderr -v=1] stderr:
I0912 22:08:47.534438  106772 out.go:345] Setting OutFile to fd 1 ...
I0912 22:08:47.534795  106772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:08:47.534832  106772 out.go:358] Setting ErrFile to fd 2...
I0912 22:08:47.534851  106772 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:08:47.535300  106772 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:08:47.535944  106772 mustload.go:65] Loading cluster: functional-168025
I0912 22:08:47.536676  106772 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:08:47.537430  106772 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:08:47.577236  106772 host.go:66] Checking if "functional-168025" exists ...
I0912 22:08:47.577730  106772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 22:08:47.856281  106772 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:08:47.833900722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 22:08:47.856547  106772 cli_runner.go:164] Run: docker network inspect functional-168025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 22:08:47.895897  106772 out.go:201] 
W0912 22:08:47.897700  106772 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0912 22:08:47.899933  106772 out.go:201] 
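The HOST_UNSUPPORTED exit above is the root cause of all three MountCmd failures in this run: minikube aborts before starting its 9p server, so /mount-9p is never created and every findmnt probe over ssh fails. A minimal pre-flight check for a Linux host, assuming standard kernel module naming (not taken from this log):

	# the 9p client must be built in or loadable for "minikube mount" to work
	grep -w 9p /proc/filesystems || sudo modprobe 9p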
--- FAIL: TestFunctional/parallel/MountCmd/any-port (14.34s)

TestFunctional/parallel/MountCmd/specific-port (16.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdspecific-port3186354856/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (784.78052ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
	[the same check was retried seven more times (376ms-462ms each), failing identically with exit status 1]
functional_test_mount_test.go:253: /mount-9p did not appear within 15.262268993s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (398.921281ms)

-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-168025 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "sudo umount -f /mount-9p": exit status 1 (417.596197ms)

-- stdout --
	umount: /mount-9p: no mount point specified.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-168025 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdspecific-port3186354856/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdspecific-port3186354856/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdspecific-port3186354856/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0912 22:09:01.917891  107432 out.go:345] Setting OutFile to fd 1 ...
I0912 22:09:01.918543  107432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:01.918609  107432 out.go:358] Setting ErrFile to fd 2...
I0912 22:09:01.918635  107432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:01.919368  107432 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:09:01.919883  107432 mustload.go:65] Loading cluster: functional-168025
I0912 22:09:01.921408  107432 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:09:01.924015  107432 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:09:02.019249  107432 host.go:66] Checking if "functional-168025" exists ...
I0912 22:09:02.019827  107432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 22:09:02.310347  107432 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:02.2531309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 22:09:02.310613  107432 cli_runner.go:164] Run: docker network inspect functional-168025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 22:09:02.359356  107432 out.go:201] 
W0912 22:09:02.362101  107432 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0912 22:09:02.364723  107432 out.go:201] 
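Same HOST_UNSUPPORTED root cause as any-port; this variant only pins the 9p server port. A hedged repro sketch for a host that does support 9p, with /tmp/src standing in for the per-test temp directory:

	out/minikube-linux-amd64 mount -p functional-168025 /tmp/src:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T /mount-9p | grep 9p"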
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (16.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "findmnt -T" /mount1: exit status 1 (1.138880804s)
** stderr **
	ssh: Process exited with status 1
** /stderr **
	[the same check was retried six more times (375ms-544ms each), failing identically with exit status 1]
functional_test_mount_test.go:342: mount was not ready in time: exit status 1
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount1 --alsologtostderr -v=1] stdout:

functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount1 --alsologtostderr -v=1] stderr:
I0912 22:09:18.153808  108179 out.go:345] Setting OutFile to fd 1 ...
I0912 22:09:18.154165  108179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.154184  108179 out.go:358] Setting ErrFile to fd 2...
I0912 22:09:18.154194  108179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.154585  108179 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:09:18.155042  108179 mustload.go:65] Loading cluster: functional-168025
I0912 22:09:18.155674  108179 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:09:18.156717  108179 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:09:18.267682  108179 host.go:66] Checking if "functional-168025" exists ...
I0912 22:09:18.268534  108179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 22:09:18.800777  108179 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:18.645823503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 22:09:18.801035  108179 cli_runner.go:164] Run: docker network inspect functional-168025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 22:09:18.871660  108179 out.go:201] 
W0912 22:09:18.873761  108179 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0912 22:09:18.875354  108179 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount2 --alsologtostderr -v=1] stdout:

functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount2 --alsologtostderr -v=1] stderr:
I0912 22:09:18.255783  108180 out.go:345] Setting OutFile to fd 1 ...
I0912 22:09:18.256196  108180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.256216  108180 out.go:358] Setting ErrFile to fd 2...
I0912 22:09:18.256226  108180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.256651  108180 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:09:18.257213  108180 mustload.go:65] Loading cluster: functional-168025
I0912 22:09:18.258087  108180 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:09:18.259203  108180 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:09:18.358372  108180 host.go:66] Checking if "functional-168025" exists ...
I0912 22:09:18.358903  108180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 22:09:18.799207  108180 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:18.645823503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 22:09:18.799530  108180 cli_runner.go:164] Run: docker network inspect functional-168025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 22:09:18.863519  108180 out.go:201] 
W0912 22:09:18.865949  108180 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0912 22:09:18.871773  108180 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount3 --alsologtostderr -v=1] stdout:

functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-168025 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2860855755/001:/mount3 --alsologtostderr -v=1] stderr:
I0912 22:09:18.203044  108181 out.go:345] Setting OutFile to fd 1 ...
I0912 22:09:18.210712  108181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.210744  108181 out.go:358] Setting ErrFile to fd 2...
I0912 22:09:18.210756  108181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:09:18.211437  108181 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:09:18.211966  108181 mustload.go:65] Loading cluster: functional-168025
I0912 22:09:18.212695  108181 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:09:18.217556  108181 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:09:18.322822  108181 host.go:66] Checking if "functional-168025" exists ...
I0912 22:09:18.323394  108181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0912 22:09:18.800835  108181 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:18.645823503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0912 22:09:18.801041  108181 cli_runner.go:164] Run: docker network inspect functional-168025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0912 22:09:18.892730  108181 out.go:201] 
W0912 22:09:18.894567  108181 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0912 22:09:18.896179  108181 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (13.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-168025 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.78.146   <pending>     80:30779/TCP   4m33s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
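The request URL degenerated to "http://" because the ingress IP substituted into it was empty, the same pending-LoadBalancer symptom as WaitService/IngressIP above. What the test expects to work, sketched in shell and assuming a healthy tunnel:

	out/minikube-linux-amd64 -p functional-168025 tunnel &
	IP=$(kubectl --context functional-168025 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" | grep "Welcome to nginx!"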
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.37s)
Test pass (96/108)

Order  Passed test  Duration (s)
3 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
4 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
5 TestAddons/Setup 262.26
9 TestAddons/serial/GCPAuth/Namespaces 0.25
12 TestAddons/parallel/Ingress 28.5
13 TestAddons/parallel/InspektorGadget 12.51
14 TestAddons/parallel/MetricsServer 8.23
15 TestAddons/parallel/HelmTiller 11.83
17 TestAddons/parallel/CSI 54.95
18 TestAddons/parallel/Headlamp 19.39
19 TestAddons/parallel/CloudSpanner 6.75
20 TestAddons/parallel/LocalPath 62.05
21 TestAddons/parallel/NvidiaDevicePlugin 6.65
22 TestAddons/parallel/Yakd 11.18
23 TestAddons/StoppedEnableDisable 12.24
26 TestFunctional/serial/CopySyncFile 0.11
27 TestFunctional/serial/StartWithProxy 91.7
28 TestFunctional/serial/AuditLog 0.05
29 TestFunctional/serial/SoftStart 38.28
30 TestFunctional/serial/KubeContext 0.1
31 TestFunctional/serial/KubectlGetPods 0.12
34 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
35 TestFunctional/serial/CacheCmd/cache/add_local 1.67
36 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.1
37 TestFunctional/serial/CacheCmd/cache/list 0.11
38 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
39 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
40 TestFunctional/serial/CacheCmd/cache/delete 0.19
41 TestFunctional/serial/MinikubeKubectlCmd 1.18
42 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.2
43 TestFunctional/serial/ExtraConfig 48.04
44 TestFunctional/serial/ComponentHealth 0.28
45 TestFunctional/serial/LogsCmd 1.75
46 TestFunctional/serial/LogsFileCmd 1.88
47 TestFunctional/serial/InvalidService 4.66
49 TestFunctional/parallel/ConfigCmd 0.93
50 TestFunctional/parallel/DashboardCmd 17.76
51 TestFunctional/parallel/DryRun 0.7
52 TestFunctional/parallel/InternationalLanguage 0.41
53 TestFunctional/parallel/StatusCmd 1.62
57 TestFunctional/parallel/ServiceCmdConnect 13
58 TestFunctional/parallel/AddonsCmd 0.31
59 TestFunctional/parallel/PersistentVolumeClaim 30.44
61 TestFunctional/parallel/SSHCmd 1.15
62 TestFunctional/parallel/CpCmd 4.25
63 TestFunctional/parallel/MySQL 36.92
64 TestFunctional/parallel/FileSync 0.41
65 TestFunctional/parallel/CertSync 2.59
69 TestFunctional/parallel/NodeLabels 0.1
71 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
73 TestFunctional/parallel/License 0.85
75 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.06
76 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
78 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.87
80 TestFunctional/parallel/ServiceCmd/DeployApp 7.39
81 TestFunctional/parallel/ServiceCmd/List 0.67
82 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
83 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
84 TestFunctional/parallel/ServiceCmd/Format 0.57
85 TestFunctional/parallel/ServiceCmd/URL 0.67
86 TestFunctional/parallel/ProfileCmd/profile_not_create 0.72
87 TestFunctional/parallel/ProfileCmd/profile_list 0.57
88 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
92 TestFunctional/parallel/Version/short 0.09
93 TestFunctional/parallel/Version/components 1.66
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
98 TestFunctional/parallel/ImageCommands/ImageBuild 3.2
99 TestFunctional/parallel/ImageCommands/Setup 2.87
100 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
101 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.16
102 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.69
103 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
104 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
105 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
106 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
107 TestFunctional/parallel/DockerEnv/bash 1.56
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.41
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
116 TestFunctional/delete_echo-server_images 0.07
117 TestFunctional/delete_my-image_image 0.04
118 TestFunctional/delete_minikube_cached_images 0.04
123 TestStartStop/group/cloud-shell/serial/FirstStart 81.96
124 TestStartStop/group/cloud-shell/serial/DeployApp 8.61
125 TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive 2.53
126 TestStartStop/group/cloud-shell/serial/Stop 11.34
127 TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop 0.33
128 TestStartStop/group/cloud-shell/serial/SecondStart 279.93
129 TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop 6.01
130 TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop 6.18
131 TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages 0.4
132 TestStartStop/group/cloud-shell/serial/Pause 4.52
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-331995
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-331995: exit status 85 (159.802215ms)

-- stdout --
	* Profile "addons-331995" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331995"
-- /stdout --
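Exit status 85 is the expected outcome here: with no addons-331995 profile yet, the addon command must fail fast instead of touching a cluster. Reduced to shell (the exit code is what this run produced, not a value from minikube's documented exit-code table):

	out/minikube-linux-amd64 addons enable dashboard -p addons-331995
	echo $?   # 85 in this run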
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-331995
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-331995: exit status 85 (164.342251ms)

-- stdout --
	* Profile "addons-331995" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331995"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

TestAddons/Setup (262.26s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-331995 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-331995 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m22.259060087s)
--- PASS: TestAddons/Setup (262.26s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-331995 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-331995 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/parallel/Ingress (28.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-331995 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-331995 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-331995 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [404d5fa7-9b57-4f53-96a6-3a4f74c68d1a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [404d5fa7-9b57-4f53-96a6-3a4f74c68d1a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006004787s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-331995 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:288: (dbg) Done: kubectl --context addons-331995 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.025447768s)
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable ingress-dns --alsologtostderr -v=1: (2.319591757s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable ingress --alsologtostderr -v=1: (12.134064193s)
--- PASS: TestAddons/parallel/Ingress (28.50s)
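
The ingress verification above can be replayed by hand; a sketch against the same profile, assuming the test's nginx ingress for host nginx.example.com is still applied:
	# route by Host header through the ingress controller on the node
	minikube -p addons-331995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# resolve a name served by the ingress-dns addon, using the node IP as the DNS server
	nslookup hello-john.test "$(minikube -p addons-331995 ip)"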

TestAddons/parallel/InspektorGadget (12.51s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qxkpk" [f66e77ac-0168-4c48-9d46-ce276ef98484] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.025735598s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-331995
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-331995: (6.483450564s)
--- PASS: TestAddons/parallel/InspektorGadget (12.51s)

TestAddons/parallel/MetricsServer (8.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 13.129354ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qj8c7" [10575e3b-51e3-4a17-9911-8ed2245ed9c6] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.121575573s
addons_test.go:417: (dbg) Run:  kubectl --context addons-331995 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable metrics-server --alsologtostderr -v=1: (1.56531326s)
--- PASS: TestAddons/parallel/MetricsServer (8.23s)
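
A minimal sketch of the same metrics check (top may error until metrics-server completes its first scrape):
	kubectl --context addons-331995 top pods -n kube-system
	minikube -p addons-331995 addons disable metrics-server --alsologtostderr -v=1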

TestAddons/parallel/HelmTiller (11.83s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 8.992478ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-pxwc5" [c1f59ae3-6c11-42a9-b480-6fb541264acf] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006645464s
addons_test.go:475: (dbg) Run:  kubectl --context addons-331995 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-331995 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.975065133s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.83s)

TestAddons/parallel/CSI (54.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 41.815682ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6562d1b1-ce82-4caf-bda9-cf9eaf458b9a] Pending
helpers_test.go:344: "task-pv-pod" [6562d1b1-ce82-4caf-bda9-cf9eaf458b9a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6562d1b1-ce82-4caf-bda9-cf9eaf458b9a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.006472496s
addons_test.go:590: (dbg) Run:  kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331995 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331995 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-331995 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-331995 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a2b4f1de-8ce9-4fa5-91eb-c0a8ef3795df] Pending
helpers_test.go:344: "task-pv-pod-restore" [a2b4f1de-8ce9-4fa5-91eb-c0a8ef3795df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a2b4f1de-8ce9-4fa5-91eb-c0a8ef3795df] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005356468s
addons_test.go:632: (dbg) Run:  kubectl --context addons-331995 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-331995 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-331995 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.558348583s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable volumesnapshots --alsologtostderr -v=1: (1.239644759s)
--- PASS: TestAddons/parallel/CSI (54.95s)
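
The snapshot/restore flow exercised above, step by step (manifest paths are the test's testdata; their contents are assumed):
	kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pvc.yaml        # PVC bound by csi-hostpath-driver
	kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod writing into the volume
	kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/snapshot.yaml   # VolumeSnapshot of the bound PVC
	kubectl --context addons-331995 delete pod task-pv-pod && kubectl --context addons-331995 delete pvc hpvc
	kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC with the snapshot as dataSource
	kubectl --context addons-331995 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod reading the restored data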

TestAddons/parallel/Headlamp (19.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-331995 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-331995 --alsologtostderr -v=1: (1.422324993s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-2zf5g" [0ca8d205-a605-49fb-a3a4-1ed17887e340] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-2zf5g" [0ca8d205-a605-49fb-a3a4-1ed17887e340] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-2zf5g" [0ca8d205-a605-49fb-a3a4-1ed17887e340] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005743868s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable headlamp --alsologtostderr -v=1: (5.96030025s)
--- PASS: TestAddons/parallel/Headlamp (19.39s)

TestAddons/parallel/CloudSpanner (6.75s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-2cg5r" [cec9a75e-aa5c-478c-9e4f-3504905aa987] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004190148s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-331995
--- PASS: TestAddons/parallel/CloudSpanner (6.75s)

TestAddons/parallel/LocalPath (62.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-331995 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-331995 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [92874af0-1e0a-4b64-a8fe-ccae9203f714] Pending
helpers_test.go:344: "test-local-path" [92874af0-1e0a-4b64-a8fe-ccae9203f714] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [92874af0-1e0a-4b64-a8fe-ccae9203f714] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [92874af0-1e0a-4b64-a8fe-ccae9203f714] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.005493219s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-331995 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 ssh "cat /opt/local-path-provisioner/pvc-5e56f996-c0f6-4c32-a045-9b3dddf180de_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-331995 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-331995 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.432232713s)
--- PASS: TestAddons/parallel/LocalPath (62.05s)
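
A sketch of a claim against the local-path provisioner that the addon installs (the storageClassName below is the provisioner's usual default and is assumed here):
	kubectl --context addons-331995 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	EOF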

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4sqcf" [2bb7e4c9-91fb-4914-ab76-7ffc5517e40d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005007468s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-331995
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

TestAddons/parallel/Yakd (11.18s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xqz4j" [8c5240c9-73f4-4786-ab2a-d6b310625b35] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007621819s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-331995 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-331995 addons disable yakd --alsologtostderr -v=1: (6.141774438s)
--- PASS: TestAddons/parallel/Yakd (11.18s)

TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-331995
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-331995: (11.769037842s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-331995
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-331995
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-331995
--- PASS: TestAddons/StoppedEnableDisable (12.24s)
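
What the stop/enable/disable sequence above demonstrates, in short: addon state is profile configuration, so it can be toggled while the cluster is stopped and takes effect on the next start. A sketch:
	minikube stop -p addons-331995
	minikube addons enable dashboard -p addons-331995     # recorded in the profile while stopped
	minikube addons disable dashboard -p addons-331995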

TestFunctional/serial/CopySyncFile (0.11s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/files/etc/test/nested/copy/69920/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.11s)

TestFunctional/serial/StartWithProxy (91.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0912 22:05:11.627110   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.649864   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.661746   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.683265   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.724986   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.806495   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:11.992664   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:12.314569   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:12.956469   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:14.238634   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:16.800179   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:21.922109   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:32.164377   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:05:52.646230   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-168025 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m31.525340498s)
--- PASS: TestFunctional/serial/StartWithProxy (91.70s)

TestFunctional/serial/AuditLog (0.05s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.05s)

TestFunctional/serial/SoftStart (38.28s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --alsologtostderr -v=8
E0912 22:06:33.607652   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-168025 --alsologtostderr -v=8: (38.050517776s)
functional_test.go:663: soft start took 38.280467312s for "functional-168025" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.28s)

TestFunctional/serial/KubeContext (0.1s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-168025 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 cache add registry.k8s.io/pause:3.3: (1.012342147s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-168025 /tmp/TestFunctionalserialCacheCmdcacheadd_local886456987/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache add minikube-local-cache-test:functional-168025
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache delete minikube-local-cache-test:functional-168025
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-168025
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.11s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (438.818208ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
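
The cache round trip above, as a hand sketch (image name from this run):
	minikube -p functional-168025 ssh sudo docker rmi registry.k8s.io/pause:latest       # remove it from the node
	minikube -p functional-168025 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	minikube -p functional-168025 cache reload                                           # re-push cached images into the node
	minikube -p functional-168025 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # image present again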

TestFunctional/serial/CacheCmd/cache/delete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

TestFunctional/serial/MinikubeKubectlCmd (1.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 kubectl -- --context functional-168025 get pods
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 kubectl -- --context functional-168025 get pods: (1.176953075s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-168025 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.20s)

TestFunctional/serial/ExtraConfig (48.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-168025 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.042537937s)
functional_test.go:761: restart took 48.042715831s for "functional-168025" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (48.04s)
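
--extra-config forwards component flags in component.key=value form; the restart above is equivalent to:
	minikube start -p functional-168025 --wait=all \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision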

TestFunctional/serial/ComponentHealth (0.28s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-168025 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.28s)

TestFunctional/serial/LogsCmd (1.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 logs: (1.749883341s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

TestFunctional/serial/LogsFileCmd (1.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 logs --file /tmp/TestFunctionalserialLogsFileCmd2023813705/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 logs --file /tmp/TestFunctionalserialLogsFileCmd2023813705/001/logs.txt: (1.873802526s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.88s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-168025 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-168025
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-168025: exit status 115 (540.12419ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31973 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_5b55102efd84289233ffc613c137836b410b4e4d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-168025 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.66s)
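
What the test shows: minikube service validates that a running pod backs the service before printing a URL. A sketch of the same round trip:
	kubectl --context functional-168025 apply -f testdata/invalidsvc.yaml
	minikube -p functional-168025 service invalid-svc    # exit 115 (SVC_UNREACHABLE): no running pod for the service
	kubectl --context functional-168025 delete -f testdata/invalidsvc.yaml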

TestFunctional/parallel/ConfigCmd (0.93s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 config get cpus: exit status 14 (175.09368ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 config get cpus: exit status 14 (164.029667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.93s)
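
The config lifecycle above, condensed (get on an unset key exits 14):
	minikube -p functional-168025 config set cpus 2
	minikube -p functional-168025 config get cpus      # prints 2
	minikube -p functional-168025 config unset cpus
	minikube -p functional-168025 config get cpus      # exit 14: key not found in config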

TestFunctional/parallel/DashboardCmd (17.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168025 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168025 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 109294: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.76s)

TestFunctional/parallel/DryRun (0.7s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168025 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (292.384227ms)

-- stdout --
	* [functional-168025] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0912 22:09:33.149801  109042 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:09:33.150094  109042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:09:33.150143  109042 out.go:358] Setting ErrFile to fd 2...
	I0912 22:09:33.150164  109042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:09:33.150452  109042 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
	I0912 22:09:33.151046  109042 out.go:352] Setting JSON to false
	I0912 22:09:33.152423  109042 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":4074,"bootTime":1726174899,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0912 22:09:33.152520  109042 start.go:139] virtualization:  guest
	I0912 22:09:33.156435  109042 out.go:177] * [functional-168025] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	I0912 22:09:33.159955  109042 notify.go:220] Checking for updates...
	I0912 22:09:33.160121  109042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:09:33.163085  109042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:09:33.167155  109042 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 22:09:33.170311  109042 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	I0912 22:09:33.173518  109042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:09:33.177767  109042 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0912 22:09:33.181990  109042 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:09:33.183844  109042 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:09:33.234809  109042 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0912 22:09:33.235112  109042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:09:33.339111  109042 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:33.320264953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:09:33.339306  109042 docker.go:318] overlay module found
	I0912 22:09:33.344020  109042 out.go:177] * Using the docker driver based on existing profile
	I0912 22:09:33.346931  109042 start.go:297] selected driver: docker
	I0912 22:09:33.346962  109042 start.go:901] validating driver "docker" against &{Name:functional-168025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-168025 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:09:33.347211  109042 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:09:33.350385  109042 out.go:201] 
	W0912 22:09:33.353323  109042 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 22:09:33.356179  109042 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.70s)
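
A sketch of the same validation: --dry-run runs the full config checks without creating anything, so an undersized memory request fails up front:
	minikube start -p functional-168025 --dry-run --memory 250MB --driver=docker \
	  --container-runtime=docker    # exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY): below the 1800MB minimum
	minikube start -p functional-168025 --dry-run --alsologtostderr -v=1    # passes with the profile's saved config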

TestFunctional/parallel/InternationalLanguage (0.41s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168025 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168025 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (405.675755ms)

-- stdout --
	* [functional-168025] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0912 22:09:32.846593  108998 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:09:32.846895  108998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:09:32.846942  108998 out.go:358] Setting ErrFile to fd 2...
	I0912 22:09:32.846959  108998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:09:32.847612  108998 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
	I0912 22:09:32.848500  108998 out.go:352] Setting JSON to false
	I0912 22:09:32.849699  108998 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":4074,"bootTime":1726174899,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0912 22:09:32.849794  108998 start.go:139] virtualization:  guest
	I0912 22:09:32.854842  108998 out.go:177] * [functional-168025] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	I0912 22:09:32.858433  108998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:09:32.858459  108998 notify.go:220] Checking for updates...
	I0912 22:09:32.867788  108998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:09:32.870744  108998 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19616-63719/kubeconfig
	I0912 22:09:32.880498  108998 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19616-63719/.minikube
	I0912 22:09:32.884432  108998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:09:32.887985  108998 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0912 22:09:32.893537  108998 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:09:32.894660  108998 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:09:32.937999  108998 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0912 22:09:32.938240  108998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:09:33.044816  108998 info.go:266] docker info: {ID:cc2c2805-45ae-4725-9955-34f6536c4026 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-12 22:09:33.027640921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:09:33.045016  108998 docker.go:318] overlay module found
	I0912 22:09:33.049156  108998 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0912 22:09:33.052781  108998 start.go:297] selected driver: docker
	I0912 22:09:33.052813  108998 start.go:901] validating driver "docker" against &{Name:functional-168025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-168025 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:09:33.052988  108998 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:09:33.056746  108998 out.go:201] 
	W0912 22:09:33.059959  108998 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 22:09:33.062692  108998 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.41s)
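The French stdout/stderr above is the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure; the test deliberately reuses the failing dry-run to exercise translations. How the locale was configured is not shown in this log; a sketch assuming minikube picks it up from the standard LC_ALL/LANG environment variables:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-168025",
			"--dry-run", "--memory", "250MB", "--driver=docker")
		// Assumption: a French locale in the environment makes minikube emit the
		// translated "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" message.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
	}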

                                                
                                    
TestFunctional/parallel/StatusCmd (1.62s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.62s)
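StatusCmd reads cluster state three ways: default text, a custom Go template (keys such as "kublet" in the command above are arbitrary output labels; only the {{.Kubelet}}-style field names matter), and JSON. A sketch decoding the JSON form; the struct mirrors only the template fields used above and is an assumption, not minikube's full status schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Fields mirror the template keys used by the test above; treat this
	// struct as an assumption, not minikube's complete status output.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-168025",
			"status", "-o", "json").Output()
		if err != nil {
			// minikube status also exits non-zero when components are stopped.
			fmt.Println("non-zero exit:", err)
		}
		var s Status
		if err := json.Unmarshal(out, &s); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
	}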

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-168025 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-168025 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mbz8r" [78e2a58f-e71e-4bff-a093-626c87da45fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mbz8r" [78e2a58f-e71e-4bff-a093-626c87da45fc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.006543681s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32542
functional_test.go:1675: http://192.168.49.2:32542: success! body:

Hostname: hello-node-connect-67bdd5bbb4-mbz8r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32542
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.00s)
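The test resolves the service's NodePort URL with "minikube service --url" and fetches it; the echoserver body above shows the request as seen from inside the cluster. A sketch of the same round trip:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube for the reachable URL (http://192.168.49.2:32542 above).
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-168025",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s\n", body) // echoserver echoes hostname, headers, etc.
	}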

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.31s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f6e08d9f-1347-4bbc-b962-30c93b4d4428] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007581635s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-168025 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-168025 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-168025 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168025 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14e06b59-c0df-4c9e-93fb-659891820f18] Pending
helpers_test.go:344: "sp-pod" [14e06b59-c0df-4c9e-93fb-659891820f18] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14e06b59-c0df-4c9e-93fb-659891820f18] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005556788s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-168025 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-168025 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-168025 delete -f testdata/storage-provisioner/pod.yaml: (1.085087276s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168025 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [24dcad50-f5c6-4f88-8b4e-1ef6ede2c522] Pending
helpers_test.go:344: "sp-pod" [24dcad50-f5c6-4f88-8b4e-1ef6ede2c522] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [24dcad50-f5c6-4f88-8b4e-1ef6ede2c522] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006060341s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-168025 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.44s)
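The persistence check above follows a write / delete pod / recreate / read pattern: a file touched on the PVC-backed mount must survive the pod being replaced. A condensed sketch of that sequence (the readiness waits between steps are omitted here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) []byte {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		return out
	}

	func main() {
		ctx := "--context=functional-168025"
		// Write a marker file onto the PVC-backed mount, then recycle the pod.
		run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (Wait for the new pod to reach Running before this step.)
		out := run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
		fmt.Printf("%s", out) // "foo" proves the volume outlived the pod
	}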

                                                
                                    
TestFunctional/parallel/SSHCmd (1.15s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.15s)

                                                
                                    
TestFunctional/parallel/CpCmd (4.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh -n functional-168025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cp functional-168025:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2780848473/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh -n functional-168025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh -n functional-168025 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 ssh -n functional-168025 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.006410493s)
--- PASS: TestFunctional/parallel/CpCmd (4.25s)
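CpCmd verifies copies both into and out of the node. A sketch of the host-to-node leg, reading the file back over SSH and comparing bytes (commands mirror the helpers_test.go invocations above):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		mk := "out/minikube-linux-amd64"
		p := "functional-168025"
		// Copy a local file into the node, then read it back and compare.
		if out, err := exec.Command(mk, "-p", p, "cp", "testdata/cp-test.txt",
			"/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp: %v\n%s", err, out))
		}
		remote, err := exec.Command(mk, "-p", p, "ssh", "-n", p,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		local, _ := os.ReadFile("testdata/cp-test.txt")
		fmt.Println("round-trip intact:", bytes.Equal(local, remote))
	}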

                                                
                                    
TestFunctional/parallel/MySQL (36.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-168025 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-m6nzx" [e6c486f6-695f-4000-962a-2760a37d31f6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-m6nzx" [e6c486f6-695f-4000-962a-2760a37d31f6] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.005741542s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;": exit status 1 (350.557334ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0912 22:10:39.372353   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1807: (dbg) Run:  kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;": exit status 1 (296.992661ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;": exit status 1 (406.14508ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-168025 exec mysql-6cdb49bbb-m6nzx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.92s)
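The three failed exec attempts above (two ERROR 1045, one ERROR 2002) are expected while mysqld is still initializing inside the container; the test simply retries until the query succeeds. A sketch of such a retry loop, reusing the pod name from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-168025",
				"exec", "mysql-6cdb49bbb-m6nzx", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			// ERROR 1045 / ERROR 2002 while the server bootstraps: back off, retry.
			fmt.Printf("attempt %d: %v\n", i+1, err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("mysql never became ready")
	}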

                                                
                                    
TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/69920/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /etc/test/nested/copy/69920/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
TestFunctional/parallel/CertSync (2.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/69920.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /etc/ssl/certs/69920.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/69920.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /usr/share/ca-certificates/69920.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/699202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /etc/ssl/certs/699202.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/699202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /usr/share/ca-certificates/699202.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.59s)
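CertSync checks each synced certificate under two path-style names plus a hash-style name (the ".0" files appear to follow OpenSSL's subject-hash naming convention). A sketch probing the three locations used for the first cert above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same certificate should be readable under all three names the
		// test checks above.
		paths := []string{
			"/etc/ssl/certs/69920.pem",
			"/usr/share/ca-certificates/69920.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-168025",
				"ssh", "sudo cat "+p).Output()
			fmt.Printf("%s: err=%v, %d bytes\n", p, err, len(out))
		}
	}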

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-168025 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh "sudo systemctl is-active crio": exit status 1 (414.208019ms)

                                                
                                                
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
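"systemctl is-active" exits with status 3 when a unit is inactive, so the non-zero exit plus "inactive" on stdout above is the expected outcome for a disabled runtime, not a failure (minikube surfaces the remote status 3 as its own exit status 1). A sketch that treats it that way:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-168025",
			"ssh", "sudo systemctl is-active crio")
		out, err := cmd.CombinedOutput()
		// A non-zero exit with "inactive" on stdout is the expected result
		// when crio is disabled; the remote status 3 shows up in stderr.
		if strings.Contains(string(out), "inactive") && err != nil {
			fmt.Println("crio correctly disabled; local exit code:",
				cmd.ProcessState.ExitCode())
			return
		}
		fmt.Printf("unexpected: err=%v out=%s\n", err, out)
	}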

                                                
                                    
TestFunctional/parallel/License (0.85s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.85s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 104565: os: process already finished
helpers_test.go:502: unable to terminate pid 104400: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.87s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-168025 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f67e9de8-fe0e-4b44-8956-7495f4102a6b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f67e9de8-fe0e-4b44-8956-7495f4102a6b] Running
E0912 22:07:55.529916   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.014423275s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-168025 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-168025 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-sh2bw" [37713987-cee7-4d45-9457-b67ae85e4134] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-sh2bw" [37713987-cee7-4d45-9457-b67ae85e4134] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005762918s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service list -o json
functional_test.go:1494: Took "650.727377ms" to run "out/minikube-linux-amd64 -p functional-168025 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31815
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31815
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.67s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.72s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.72s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "482.583749ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "91.51623ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "497.47055ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "84.905392ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 version -o=json --components: (1.659347856s)
--- PASS: TestFunctional/parallel/Version/components (1.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168025 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-168025
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-168025
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168025 image ls --format short --alsologtostderr:
I0912 22:10:47.084720  111839 out.go:345] Setting OutFile to fd 1 ...
I0912 22:10:47.084926  111839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:47.084943  111839 out.go:358] Setting ErrFile to fd 2...
I0912 22:10:47.084954  111839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:47.085574  111839 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:10:47.087338  111839 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:47.087643  111839 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:47.088752  111839 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:10:47.116399  111839 ssh_runner.go:195] Run: systemctl --version
I0912 22:10:47.116589  111839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168025
I0912 22:10:47.146179  111839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/functional-168025/id_rsa Username:docker}
I0912 22:10:47.242368  111839 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168025 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-168025 | 9595524e52bc6 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kicbase/echo-server               | functional-168025 | 9056ab77afb8e | 4.94MB |
| docker.io/library/minikube-local-cache-test | functional-168025 | 540a77f781f71 | 30B    |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168025 image ls --format table --alsologtostderr:
I0912 22:10:51.207087  112177 out.go:345] Setting OutFile to fd 1 ...
I0912 22:10:51.207338  112177 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:51.207354  112177 out.go:358] Setting ErrFile to fd 2...
I0912 22:10:51.207365  112177 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:51.207690  112177 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:10:51.208524  112177 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:51.208798  112177 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:51.209503  112177 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:10:51.239571  112177 ssh_runner.go:195] Run: systemctl --version
I0912 22:10:51.239848  112177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168025
I0912 22:10:51.272932  112177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/functional-168025/id_rsa Username:docker}
I0912 22:10:51.370456  112177 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168025 image ls --format json --alsologtostderr:
[{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-168025"],"size":"4940000"},{"id":"9595524e52bc6d340f2ba46213589878b2c8e155019508aba801189744d17560","repoDigests":[],"repoTags":["localhost/my-image:functional-168025"],"size":"1240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"115053965e86b2df4d78af78d79
51b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"540a77f781f717f4dd822eb0bf35513173844c976aea6a5dd55a266398f24ced","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-168025"],"size":"30"},{"id":"9aa1fad941575eed91ab13d44f3e4c
b5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"siz
e":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168025 image ls --format json --alsologtostderr:
I0912 22:10:50.892385  112145 out.go:345] Setting OutFile to fd 1 ...
I0912 22:10:50.892651  112145 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:50.892668  112145 out.go:358] Setting ErrFile to fd 2...
I0912 22:10:50.892680  112145 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:50.892973  112145 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:10:50.893752  112145 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:50.893942  112145 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:50.894560  112145 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:10:50.927728  112145 ssh_runner.go:195] Run: systemctl --version
I0912 22:10:50.927885  112145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168025
I0912 22:10:50.958884  112145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/functional-168025/id_rsa Username:docker}
I0912 22:10:51.058674  112145 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
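The JSON listing above is a flat array of image records. A sketch decoding it; the struct fields mirror the keys visible in the output (id, repoDigests, repoTags, and size, which is emitted as a string):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Fields taken from the JSON shown above.
	type Image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-168025",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []Image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
			}
		}
	}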

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168025 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-168025
size: "4940000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 540a77f781f717f4dd822eb0bf35513173844c976aea6a5dd55a266398f24ced
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-168025
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168025 image ls --format yaml --alsologtostderr:
I0912 22:10:47.381247  111872 out.go:345] Setting OutFile to fd 1 ...
I0912 22:10:47.381504  111872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:47.381525  111872 out.go:358] Setting ErrFile to fd 2...
I0912 22:10:47.381534  111872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:47.381910  111872 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:10:47.382718  111872 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:47.383003  111872 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:47.383842  111872 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:10:47.411599  111872 ssh_runner.go:195] Run: systemctl --version
I0912 22:10:47.411723  111872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168025
I0912 22:10:47.451539  111872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/functional-168025/id_rsa Username:docker}
I0912 22:10:47.549931  111872 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
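
Note: the YAML block above is the complete output of image ls --format yaml; each entry reports an image's id, repoDigests, repoTags, and size (sizes appear rounded by the tool). A minimal way to reproduce the listing by hand, using this run's profile:

  out/minikube-linux-amd64 -p functional-168025 image ls --format yaml
  out/minikube-linux-amd64 -p functional-168025 image list --format=json   # same data as JSON, the form used by VerifyKubernetesImages later in this report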

TestFunctional/parallel/ImageCommands/ImageBuild (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168025 ssh pgrep buildkitd: exit status 1 (421.748082ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image build -t localhost/my-image:functional-168025 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 image build -t localhost/my-image:functional-168025 testdata/build --alsologtostderr: (2.477544976s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168025 image build -t localhost/my-image:functional-168025 testdata/build --alsologtostderr:
I0912 22:10:48.103763  111973 out.go:345] Setting OutFile to fd 1 ...
I0912 22:10:48.105560  111973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:48.105590  111973 out.go:358] Setting ErrFile to fd 2...
I0912 22:10:48.105599  111973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:10:48.106153  111973 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/bin
I0912 22:10:48.107028  111973 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:48.141802  111973 config.go:182] Loaded profile config "functional-168025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:10:48.143005  111973 cli_runner.go:164] Run: docker container inspect functional-168025 --format={{.State.Status}}
I0912 22:10:48.170812  111973 ssh_runner.go:195] Run: systemctl --version
I0912 22:10:48.170942  111973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168025
I0912 22:10:48.201121  111973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19616-63719/.minikube/machines/functional-168025/id_rsa Username:docker}
I0912 22:10:48.296943  111973 build_images.go:161] Building image from path: /tmp/build.1509974651.tar
I0912 22:10:48.297158  111973 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 22:10:48.313196  111973 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1509974651.tar
I0912 22:10:48.319690  111973 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1509974651.tar: stat -c "%s %y" /var/lib/minikube/build/build.1509974651.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1509974651.tar': No such file or directory
I0912 22:10:48.319739  111973 ssh_runner.go:362] scp /tmp/build.1509974651.tar --> /var/lib/minikube/build/build.1509974651.tar (3072 bytes)
I0912 22:10:48.364753  111973 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1509974651
I0912 22:10:48.381577  111973 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1509974651 -xf /var/lib/minikube/build/build.1509974651.tar
I0912 22:10:48.398631  111973 docker.go:360] Building image: /var/lib/minikube/build/build.1509974651
I0912 22:10:48.398886  111973 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-168025 /var/lib/minikube/build/build.1509974651
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:9595524e52bc6d340f2ba46213589878b2c8e155019508aba801189744d17560 done
#8 naming to localhost/my-image:functional-168025 done
#8 DONE 0.1s
I0912 22:10:50.451038  111973 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-168025 /var/lib/minikube/build/build.1509974651: (2.052110398s)
I0912 22:10:50.451372  111973 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1509974651
I0912 22:10:50.470473  111973 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1509974651.tar
I0912 22:10:50.490400  111973 build_images.go:217] Built localhost/my-image:functional-168025 from /tmp/build.1509974651.tar
I0912 22:10:50.490457  111973 build_images.go:133] succeeded building to: functional-168025
I0912 22:10:50.490465  111973 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.20s)
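
Note: the log above shows the full ImageBuild flow: minikube tars the local build context (testdata/build), copies the archive to /var/lib/minikube/build inside the node, unpacks it, runs docker build there, and removes the scratch files. A minimal sketch of the same steps by hand, assuming this run's profile:

  out/minikube-linux-amd64 -p functional-168025 ssh pgrep buildkitd   # exit status 1, as above: no standalone buildkitd, so the classic Docker builder is used
  out/minikube-linux-amd64 -p functional-168025 image build -t localhost/my-image:functional-168025 testdata/build
  out/minikube-linux-amd64 -p functional-168025 image ls              # the new localhost/my-image tag should appear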

TestFunctional/parallel/ImageCommands/Setup (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.837564995s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-168025
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025 --alsologtostderr: (1.15880744s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)
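
Note: image load --daemon copies an image from the host's Docker daemon into the runtime inside the node, which is why the follow-up image ls can see the tag. The Setup-plus-load sequence exercised above, condensed:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-168025
  out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025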

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.231846464s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-168025
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-168025 image load --daemon kicbase/echo-server:functional-168025 --alsologtostderr: (1.124124941s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image save kicbase/echo-server:functional-168025 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image rm kicbase/echo-server:functional-168025 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)
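
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a file round trip: export the in-cluster image to a tarball, drop the in-cluster tag, then restore it from the tarball. Condensed, with the tarball path purely illustrative:

  out/minikube-linux-amd64 -p functional-168025 image save kicbase/echo-server:functional-168025 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-168025 image rm kicbase/echo-server:functional-168025
  out/minikube-linux-amd64 -p functional-168025 image load /tmp/echo-server-save.tar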

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-168025
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 image save --daemon kicbase/echo-server:functional-168025 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-168025
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/DockerEnv/bash (1.56s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168025 docker-env) && out/minikube-linux-amd64 status -p functional-168025"
functional_test.go:499: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168025 docker-env) && out/minikube-linux-amd64 status -p functional-168025": (1.030526757s)
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168025 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.56s)
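
Note: docker-env emits the environment variables (DOCKER_HOST among them) that point a local docker client at the daemon inside the node; eval-ing them makes a plain docker images list the cluster's images, which is exactly what the test checks. Condensed:

  eval $(out/minikube-linux-amd64 -p functional-168025 docker-env)
  docker images   # now served by the Docker daemon inside functional-168025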

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.41s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-168025 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.41s)
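
Note: the three update-context cases (no_changes, no_minikube_cluster, no_clusters) run the same command against differently prepared kubeconfigs and only assert that it succeeds; update-context rewrites the profile's kubeconfig entry to match the cluster's current address. A hand-run check (the kubectl step is an illustrative addition, not part of the test):

  out/minikube-linux-amd64 -p functional-168025 update-context
  kubectl config current-context   # expected to name the functional-168025 context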

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-168025 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 112750: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-168025
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-168025
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-168025
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestStartStop/group/cloud-shell/serial/FirstStart (81.96s)

=== RUN   TestStartStop/group/cloud-shell/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-038783 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:12:50.152918   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.159436   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.170919   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.192388   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.233828   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.315290   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.476854   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:50.798300   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:51.440624   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:52.722237   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:12:55.285253   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:13:00.406731   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:13:10.648228   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:13:31.129767   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-038783 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m21.95658211s)
--- PASS: TestStartStop/group/cloud-shell/serial/FirstStart (81.96s)

TestStartStop/group/cloud-shell/serial/DeployApp (8.61s)

=== RUN   TestStartStop/group/cloud-shell/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-038783 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fcdf81f5-6bc4-4486-905d-38be09304a44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fcdf81f5-6bc4-4486-905d-38be09304a44] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: integration-test=busybox healthy within 8.009179402s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-038783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/cloud-shell/serial/DeployApp (8.61s)
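
Note: the harness polls for pods carrying the integration-test=busybox label until they report Running. An equivalent hand-run check (kubectl wait is an illustrative substitute, not what the harness executes):

  kubectl --context cloud-shell-038783 create -f testdata/busybox.yaml
  kubectl --context cloud-shell-038783 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context cloud-shell-038783 exec busybox -- /bin/sh -c "ulimit -n"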

TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (2.53s)

=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-038783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-038783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.379030905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context cloud-shell-038783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (2.53s)

TestStartStop/group/cloud-shell/serial/Stop (11.34s)

=== RUN   TestStartStop/group/cloud-shell/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p cloud-shell-038783 --alsologtostderr -v=3
E0912 22:14:12.092108   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p cloud-shell-038783 --alsologtostderr -v=3: (11.339275145s)
--- PASS: TestStartStop/group/cloud-shell/serial/Stop (11.34s)

TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-038783 -n cloud-shell-038783
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-038783 -n cloud-shell-038783: exit status 7 (143.317851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-038783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.33s)
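
Note: minikube status encodes cluster state in its exit code, so exit status 7 alongside "Stopped" is the expected shape here rather than a failure (the test annotates it "may be ok"), and addons enable still works against the stored profile while the cluster is down. The same check by hand:

  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-038783 -n cloud-shell-038783
  echo $?   # 7 in this run while the cluster is stopped; 0 once it is running again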

TestStartStop/group/cloud-shell/serial/SecondStart (279.93s)

=== RUN   TestStartStop/group/cloud-shell/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-038783 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:15:11.513713   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/addons-331995/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:15:34.015289   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:17:50.153392   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:18:17.857901   69920 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19616-63719/.minikube/profiles/functional-168025/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-038783 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m39.302311615s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-038783 -n cloud-shell-038783
--- PASS: TestStartStop/group/cloud-shell/serial/SecondStart (279.93s)
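
Note: SecondStart restarts the profile stopped above and waits for the cluster to come back healthy, verifying that its state (including the dashboard addon enabled while it was stopped) survives a stop/start cycle. The cycle, condensed from the commands in this report:

  out/minikube-linux-amd64 stop -p cloud-shell-038783 --alsologtostderr -v=3
  out/minikube-linux-amd64 start -p cloud-shell-038783 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1
  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-038783 -n cloud-shell-038783   # Running once the restart completes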

TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sb222" [0f92b716-759f-48ad-b93c-0ff3886c7dfa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005636057s
--- PASS: TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (6.18s)

=== RUN   TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sb222" [0f92b716-759f-48ad-b93c-0ff3886c7dfa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005422546s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context cloud-shell-038783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (6.18s)

TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p cloud-shell-038783 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/cloud-shell/serial/Pause (4.52s)

=== RUN   TestStartStop/group/cloud-shell/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p cloud-shell-038783 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p cloud-shell-038783 --alsologtostderr -v=1: (1.01384535s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-038783 -n cloud-shell-038783
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-038783 -n cloud-shell-038783: exit status 2 (473.694926ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-038783 -n cloud-shell-038783
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-038783 -n cloud-shell-038783: exit status 2 (518.253179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p cloud-shell-038783 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-038783 -n cloud-shell-038783
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-038783 -n cloud-shell-038783
--- PASS: TestStartStop/group/cloud-shell/serial/Pause (4.52s)
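
Note: while paused, status reports the apiserver as Paused and the kubelet as Stopped, each with exit status 2; after unpause the same status commands succeed. Condensed:

  out/minikube-linux-amd64 pause -p cloud-shell-038783
  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-038783   # Paused, exit status 2
  out/minikube-linux-amd64 unpause -p cloud-shell-038783
  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-038783   # expected Running, exit status 0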

Test skip (5/108)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
