Test Report: Docker_Cloud_Shell 19530

6d579fb1420e6d4e07520b8ad7db429a8522bbcd:2024-08-29:35998

Failed tests (6/108)

TestAddons/parallel/Registry (76.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.073414ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-fkt2b" [a6d7d0ad-2e5b-410e-b7d4-b63cbe093d11] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009059743s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7rgrn" [b3ffbbdb-4b08-4fdc-8900-6cf736b4468f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006660314s
addons_test.go:342: (dbg) Run:  kubectl --context addons-444829 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-444829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-444829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.182117912s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-444829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 ip
2024/08/29 19:10:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-444829
helpers_test.go:235: (dbg) docker inspect addons-444829:

-- stdout --
	[
	    {
	        "Id": "9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af",
	        "Created": "2024-08-29T18:56:57.883043469Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 135188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:56:58.100802941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cf9874f1e25d62abde3fdda0022141a8ec82ded75077d073b80dc8f90194cf19",
	        "ResolvConfPath": "/var/lib/docker/containers/9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af/hosts",
	        "LogPath": "/var/lib/docker/containers/9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af/9fb446b34517410c72e07127f6b9bd481076815dea3d69706a585d3244e811af-json.log",
	        "Name": "/addons-444829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-444829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-444829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9ac79eb5863f8835b6d43b8bc9f7e948732528b67517bf6fe5ba1fb94bbb272-init/diff:/var/lib/docker/overlay2/10c6def4170da8ec3c2d7815c13325176699708f367dc9adb5c0fe197ec383e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9ac79eb5863f8835b6d43b8bc9f7e948732528b67517bf6fe5ba1fb94bbb272/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9ac79eb5863f8835b6d43b8bc9f7e948732528b67517bf6fe5ba1fb94bbb272/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9ac79eb5863f8835b6d43b8bc9f7e948732528b67517bf6fe5ba1fb94bbb272/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-444829",
	                "Source": "/var/lib/docker/volumes/addons-444829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-444829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-444829",
	                "name.minikube.sigs.k8s.io": "addons-444829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "661402e183fe3f3293e6bf841b4160ad0d6a7dc3b58d24d7a5a4e6c08a3cce43",
	            "SandboxKey": "/var/run/docker/netns/661402e183fe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-444829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5a035feeb1474dae44f2678a2eddb3f53180f6081047cc9344468c6b9c12f40c",
	                    "EndpointID": "8d814ea42c69f7d15aee283ff743dbd210c361e422eacce1d98214979c036670",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-444829",
	                        "9fb446b34517"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-444829 -n addons-444829
helpers_test.go:239: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p addons-444829 -n addons-444829: (1.056873257s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 logs -n 25: (2.104554043s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                  | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 18:56 UTC |                     |
	|         | addons-444829                        |               |                       |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 18:56 UTC |                     |
	|         | addons-444829                        |               |                       |         |                     |                     |
	| start   | -p addons-444829 --wait=true         | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 19:00 UTC |
	|         | --memory=4000 --alsologtostderr      |               |                       |         |                     |                     |
	|         | --addons=registry                    |               |                       |         |                     |                     |
	|         | --addons=metrics-server              |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots             |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                    |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner               |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget            |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |               |                       |         |                     |                     |
	|         | --driver=docker                      |               |                       |         |                     |                     |
	|         | --container-runtime=docker           |               |                       |         |                     |                     |
	|         | --addons=ingress                     |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                 |               |                       |         |                     |                     |
	|         | --addons=helm-tiller                 |               |                       |         |                     |                     |
	| addons  | addons-444829 addons disable         | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:01 UTC | 29 Aug 24 19:01 UTC |
	|         | volcano --alsologtostderr -v=1       |               |                       |         |                     |                     |
	| addons  | addons-444829 addons                 | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | disable csi-hostpath-driver          |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-444829 addons                 | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | disable volumesnapshots              |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-444829 addons disable         | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | helm-tiller --alsologtostderr        |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| addons  | addons-444829 addons                 | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | disable metrics-server               |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| ip      | addons-444829 ip                     | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	| addons  | addons-444829 addons disable         | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC | 29 Aug 24 19:10 UTC |
	|         | registry --alsologtostderr           |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-444829 | g528047478195_compute | v1.33.1 | 29 Aug 24 19:10 UTC |                     |
	|         | addons-444829                        |               |                       |         |                     |                     |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:56:07
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:56:07.679433  134705 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:56:07.679626  134705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:56:07.679641  134705 out.go:358] Setting ErrFile to fd 2...
	I0829 18:56:07.679651  134705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:56:07.679904  134705 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
	W0829 18:56:07.680210  134705 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/config/config.json: no such file or directory
	I0829 18:56:07.680791  134705 out.go:352] Setting JSON to false
	I0829 18:56:07.682417  134705 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":9605,"bootTime":1724948162,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0829 18:56:07.682501  134705 start.go:139] virtualization:  guest
	I0829 18:56:07.687141  134705 out.go:177] * [addons-444829] minikube v1.33.1 on Ubuntu 22.04 (amd64)
	W0829 18:56:07.690437  134705 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:56:07.690587  134705 notify.go:220] Checking for updates...
	I0829 18:56:07.694194  134705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:56:07.698192  134705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:56:07.701932  134705 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	I0829 18:56:07.713239  134705 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube
	I0829 18:56:07.716994  134705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:56:07.720827  134705 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0829 18:56:07.724252  134705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:56:07.765351  134705 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0829 18:56:07.765589  134705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:56:07.863258  134705 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-08-29 18:56:07.846272503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:56:07.863474  134705 docker.go:307] overlay module found
	I0829 18:56:07.867453  134705 out.go:177] * Using the docker driver based on user configuration
	I0829 18:56:07.870513  134705 start.go:297] selected driver: docker
	I0829 18:56:07.870542  134705 start.go:901] validating driver "docker" against <nil>
	I0829 18:56:07.870563  134705 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:56:07.871374  134705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:56:07.974601  134705 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-08-29 18:56:07.958522407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 18:56:07.974850  134705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:56:07.975744  134705 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0829 18:56:07.975770  134705 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0829 18:56:07.975841  134705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:56:07.978927  134705 out.go:177] * Using Docker driver with root privileges
	I0829 18:56:07.981697  134705 cni.go:84] Creating CNI manager for ""
	I0829 18:56:07.981738  134705 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:56:07.981758  134705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:56:07.981867  134705 start.go:340] cluster config:
	{Name:addons-444829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-444829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:56:07.985443  134705 out.go:177] * Starting "addons-444829" primary control-plane node in "addons-444829" cluster
	I0829 18:56:07.988144  134705 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:56:07.991243  134705 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0829 18:56:07.993710  134705 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:56:07.993819  134705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0829 18:56:08.019724  134705 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:56:08.019753  134705 cache.go:56] Caching tarball of preloaded images
	I0829 18:56:08.020166  134705 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:56:08.020542  134705 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0829 18:56:08.021015  134705 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0829 18:56:08.021160  134705 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0829 18:56:08.024200  134705 out.go:177] * Downloading Kubernetes v1.31.0 preload ...
	I0829 18:56:08.027090  134705 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:56:08.071239  134705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:56:11.167886  134705 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:56:11.168173  134705 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:56:12.791179  134705 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 18:56:12.792012  134705 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/config.json ...
	I0829 18:56:12.792216  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/config.json: {Name:mk3611c4e54513f149956f1239b25613e86b5212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:17.483753  134705 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0829 18:56:17.483812  134705 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0829 18:56:44.802215  134705 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0829 18:56:44.802276  134705 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:56:44.802331  134705 start.go:360] acquireMachinesLock for addons-444829: {Name:mk2c97dbf816088073789d76ed7d47c24d2d85e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:56:44.802633  134705 start.go:364] duration metric: took 271.956µs to acquireMachinesLock for "addons-444829"
	I0829 18:56:44.802709  134705 start.go:93] Provisioning new machine with config: &{Name:addons-444829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-444829 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:56:44.802814  134705 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:56:44.807217  134705 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:56:44.807767  134705 start.go:159] libmachine.API.Create for "addons-444829" (driver="docker")
	I0829 18:56:44.807828  134705 client.go:168] LocalClient.Create starting
	I0829 18:56:44.808072  134705 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem
	I0829 18:56:45.087124  134705 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/cert.pem
	I0829 18:56:45.235149  134705 cli_runner.go:164] Run: docker network inspect addons-444829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:56:45.260583  134705 cli_runner.go:211] docker network inspect addons-444829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:56:45.260717  134705 network_create.go:284] running [docker network inspect addons-444829] to gather additional debugging logs...
	I0829 18:56:45.260749  134705 cli_runner.go:164] Run: docker network inspect addons-444829
	W0829 18:56:45.289068  134705 cli_runner.go:211] docker network inspect addons-444829 returned with exit code 1
	I0829 18:56:45.289113  134705 network_create.go:287] error running [docker network inspect addons-444829]: docker network inspect addons-444829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-444829 not found
	I0829 18:56:45.289138  134705 network_create.go:289] output of [docker network inspect addons-444829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-444829 not found
	
	** /stderr **
	I0829 18:56:45.289333  134705 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:56:45.317844  134705 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc016c0c140}
	I0829 18:56:45.317911  134705 network_create.go:124] attempt to create docker network addons-444829 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0829 18:56:45.318074  134705 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-444829 addons-444829
	I0829 18:56:45.422667  134705 network_create.go:108] docker network addons-444829 192.168.49.0/24 created
	I0829 18:56:45.422726  134705 kic.go:121] calculated static IP "192.168.49.2" for the "addons-444829" container
	I0829 18:56:45.422868  134705 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:56:45.449250  134705 cli_runner.go:164] Run: docker volume create addons-444829 --label name.minikube.sigs.k8s.io=addons-444829 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:56:45.477055  134705 oci.go:103] Successfully created a docker volume addons-444829
	I0829 18:56:45.477212  134705 cli_runner.go:164] Run: docker run --rm --name addons-444829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444829 --entrypoint /usr/bin/test -v addons-444829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0829 18:56:49.812403  134705 cli_runner.go:217] Completed: docker run --rm --name addons-444829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444829 --entrypoint /usr/bin/test -v addons-444829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (4.335129071s)
	I0829 18:56:49.812510  134705 oci.go:107] Successfully prepared a docker volume addons-444829
	I0829 18:56:49.812567  134705 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:56:49.812601  134705 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:56:49.812719  134705 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:56:57.741200  134705 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (7.928413712s)
	I0829 18:56:57.741265  134705 kic.go:203] duration metric: took 7.928659658s to extract preloaded images to volume ...
	W0829 18:56:57.741498  134705 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0829 18:56:57.741568  134705 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0829 18:56:57.741682  134705 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:56:57.854244  134705 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-444829 --name addons-444829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-444829 --network addons-444829 --ip 192.168.49.2 --volume addons-444829:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0829 18:56:58.357702  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Running}}
	I0829 18:56:58.414066  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:56:58.461624  134705 cli_runner.go:164] Run: docker exec addons-444829 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:56:58.585645  134705 oci.go:144] the created container "addons-444829" has a running status.
	I0829 18:56:58.585693  134705 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa...
	I0829 18:56:58.801397  134705 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:56:58.866941  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:56:58.918666  134705 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:56:58.918696  134705 kic_runner.go:114] Args: [docker exec --privileged addons-444829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:56:59.110717  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:56:59.169097  134705 machine.go:93] provisionDockerMachine start ...
	I0829 18:56:59.170215  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:56:59.232201  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:59.232587  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:56:59.232603  134705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:56:59.486809  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444829
	
	I0829 18:56:59.486841  134705 ubuntu.go:169] provisioning hostname "addons-444829"
	I0829 18:56:59.487059  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:56:59.544005  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:59.544375  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:56:59.544403  134705 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-444829 && echo "addons-444829" | sudo tee /etc/hostname
	I0829 18:56:59.765178  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444829
	
	I0829 18:56:59.765331  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:56:59.814126  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:59.814463  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:56:59.814497  134705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-444829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-444829/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-444829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:56:59.981473  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:59.981514  134705 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube}
	I0829 18:56:59.981580  134705 ubuntu.go:177] setting up certificates
	I0829 18:56:59.981606  134705 provision.go:84] configureAuth start
	I0829 18:56:59.981745  134705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444829
	I0829 18:57:00.022133  134705 provision.go:143] copyHostCerts
	I0829 18:57:00.022262  134705 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.pem (1119 bytes)
	I0829 18:57:00.022424  134705 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/cert.pem (1164 bytes)
	I0829 18:57:00.022600  134705 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/key.pem (1679 bytes)
	I0829 18:57:00.022747  134705 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-444829 san=[127.0.0.1 192.168.49.2 addons-444829 localhost minikube]
	I0829 18:57:00.432720  134705 provision.go:177] copyRemoteCerts
	I0829 18:57:00.432875  134705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:57:00.433061  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:00.478740  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:00.611798  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:57:00.669643  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
	I0829 18:57:00.720249  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 18:57:00.772071  134705 provision.go:87] duration metric: took 790.443668ms to configureAuth
	I0829 18:57:00.772106  134705 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:57:00.772472  134705 config.go:182] Loaded profile config "addons-444829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:57:00.772656  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:00.819708  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:57:00.821016  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:57:00.821111  134705 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 18:57:01.007690  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0829 18:57:01.007721  134705 ubuntu.go:71] root file system type: overlay
	I0829 18:57:01.008272  134705 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 18:57:01.008461  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:01.067354  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:57:01.067802  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:57:01.067969  134705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 18:57:01.257309  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 18:57:01.257462  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:01.299800  134705 main.go:141] libmachine: Using SSH client type: native
	I0829 18:57:01.300188  134705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I0829 18:57:01.300229  134705 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 18:57:03.805144  134705 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-29 18:57:01.252184760 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0829 18:57:03.805206  134705 machine.go:96] duration metric: took 4.636075931s to provisionDockerMachine
	I0829 18:57:03.805257  134705 client.go:171] duration metric: took 18.997407264s to LocalClient.Create
	I0829 18:57:03.805328  134705 start.go:167] duration metric: took 18.997545418s to libmachine.API.Create "addons-444829"
	I0829 18:57:03.805383  134705 start.go:293] postStartSetup for "addons-444829" (driver="docker")
	I0829 18:57:03.805451  134705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:57:03.806380  134705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:57:03.806785  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:03.893709  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:04.103435  134705 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:57:04.124481  134705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:57:04.124812  134705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:57:04.124943  134705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:57:04.125083  134705 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:57:04.125173  134705 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/addons for local assets ...
	I0829 18:57:04.125674  134705 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/files for local assets ...
	I0829 18:57:04.125734  134705 start.go:296] duration metric: took 320.319736ms for postStartSetup
	I0829 18:57:04.127651  134705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444829
	I0829 18:57:04.228905  134705 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/config.json ...
	I0829 18:57:04.230041  134705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:57:04.230303  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:04.332645  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:04.504935  134705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:57:04.526846  134705 start.go:128] duration metric: took 19.72401053s to createHost
	I0829 18:57:04.526884  134705 start.go:83] releasing machines lock for "addons-444829", held for 19.724229538s
	I0829 18:57:04.527343  134705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444829
	I0829 18:57:04.620723  134705 ssh_runner.go:195] Run: cat /version.json
	I0829 18:57:04.620901  134705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:57:04.621118  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:04.621243  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:04.770799  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:04.779966  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:05.287433  134705 ssh_runner.go:195] Run: systemctl --version
	I0829 18:57:05.309018  134705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:57:05.338455  134705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0829 18:57:05.522656  134705 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:57:05.523330  134705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:57:05.734036  134705 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:57:05.734165  134705 start.go:495] detecting cgroup driver to use...
	I0829 18:57:05.734407  134705 detect.go:190] detected "systemd" cgroup driver on host os
	I0829 18:57:05.734930  134705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:57:05.781501  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:57:05.804056  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:57:05.825858  134705 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0829 18:57:05.826029  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0829 18:57:05.847434  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:57:05.868885  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:57:05.890335  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:57:05.911843  134705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:57:05.931747  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:57:05.953125  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:57:05.974193  134705 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 18:57:05.996070  134705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:57:06.015428  134705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:57:06.035621  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:06.288103  134705 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 18:57:06.495493  134705 start.go:495] detecting cgroup driver to use...
	I0829 18:57:06.495559  134705 detect.go:190] detected "systemd" cgroup driver on host os
	I0829 18:57:06.495655  134705 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 18:57:06.613737  134705 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0829 18:57:06.613867  134705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:57:06.650721  134705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:57:06.695201  134705 ssh_runner.go:195] Run: which cri-dockerd
	I0829 18:57:06.703591  134705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:57:06.725688  134705 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:57:06.768638  134705 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 18:57:07.160196  134705 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 18:57:07.387621  134705 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0829 18:57:07.387807  134705 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0829 18:57:07.418263  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:07.550818  134705 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:57:08.007924  134705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:57:08.028284  134705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:57:08.048006  134705 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:57:08.196103  134705 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 18:57:08.339402  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:08.475664  134705 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 18:57:08.503223  134705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:57:08.521637  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:08.659315  134705 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 18:57:08.828162  134705 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:57:08.828654  134705 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 18:57:08.838045  134705 start.go:563] Will wait 60s for crictl version
	I0829 18:57:08.838175  134705 ssh_runner.go:195] Run: which crictl
	I0829 18:57:08.846581  134705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:57:08.913108  134705 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0829 18:57:08.913246  134705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:57:08.954757  134705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:57:09.001142  134705 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0829 18:57:09.001331  134705 cli_runner.go:164] Run: docker network inspect addons-444829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:57:09.029052  134705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:57:09.035121  134705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:57:09.059790  134705 out.go:177]   - kubelet.cgroups-per-qos=false
	I0829 18:57:09.064038  134705 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0829 18:57:09.096250  134705 kubeadm.go:883] updating cluster {Name:addons-444829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-444829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:57:09.096489  134705 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:57:09.096793  134705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:57:09.128772  134705 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:57:09.128802  134705 docker.go:615] Images already preloaded, skipping extraction
	I0829 18:57:09.128935  134705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:57:09.165289  134705 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:57:09.165427  134705 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:57:09.165510  134705 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0829 18:57:09.165727  134705 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-444829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-444829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:57:09.165863  134705 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 18:57:09.252148  134705 cni.go:84] Creating CNI manager for ""
	I0829 18:57:09.252240  134705 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:57:09.252284  134705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:57:09.252369  134705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-444829 NodeName:addons-444829 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:57:09.252733  134705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-444829"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:57:09.252871  134705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:57:09.268367  134705 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:57:09.268499  134705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:57:09.283907  134705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0829 18:57:09.313685  134705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:57:09.344340  134705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0829 18:57:09.375295  134705 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:57:09.381545  134705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:57:09.400099  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:09.540421  134705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:57:09.569820  134705 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829 for IP: 192.168.49.2
	I0829 18:57:09.569849  134705 certs.go:194] generating shared ca certs ...
	I0829 18:57:09.569878  134705 certs.go:226] acquiring lock for ca certs: {Name:mk21d3bea2fe5461974fdbcf3ba1e6e6234ebc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:09.570273  134705 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.key
	I0829 18:57:09.724795  134705 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.crt ...
	I0829 18:57:09.724837  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.crt: {Name:mk52a5dda5bdbee77704169257f2a0a654c8763d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:09.725286  134705 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.key ...
	I0829 18:57:09.725314  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.key: {Name:mk47e46652539daa71a8246bbd39b43a60120708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:09.725644  134705 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.key
	I0829 18:57:09.975718  134705 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.crt ...
	I0829 18:57:09.975762  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.crt: {Name:mk709a54d48c74c112b8108ac1e0183b7bf33a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:09.976278  134705 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.key ...
	I0829 18:57:09.976306  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.key: {Name:mk812c7ef99dd7833020c1c3075166866e4061c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:09.976694  134705 certs.go:256] generating profile certs ...
	I0829 18:57:09.976876  134705 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.key
	I0829 18:57:09.976911  134705 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt with IP's: []
	I0829 18:57:10.092966  134705 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt ...
	I0829 18:57:10.093021  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: {Name:mkcc9571dacccd987894421c67883fc12dccef84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.093506  134705 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.key ...
	I0829 18:57:10.093571  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.key: {Name:mk340a0c3c7289f5b01dedb11934dc277204616f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.093939  134705 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key.1e5cfe88
	I0829 18:57:10.094041  134705 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt.1e5cfe88 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:57:10.513972  134705 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt.1e5cfe88 ...
	I0829 18:57:10.514015  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt.1e5cfe88: {Name:mk0c2ae86c7d1d068d25c19bfd8691392d292f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.514511  134705 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key.1e5cfe88 ...
	I0829 18:57:10.514542  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key.1e5cfe88: {Name:mk94765a9974d3187d538c458ff2f64b82dad0a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.514880  134705 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt.1e5cfe88 -> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt
	I0829 18:57:10.515115  134705 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key.1e5cfe88 -> /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key
	I0829 18:57:10.515255  134705 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.key
	I0829 18:57:10.515296  134705 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.crt with IP's: []
	I0829 18:57:10.659099  134705 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.crt ...
	I0829 18:57:10.659137  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.crt: {Name:mk4c9bcb82b4bd0e1ee2361a46eaf10d3b5d0e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.659597  134705 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.key ...
	I0829 18:57:10.659626  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.key: {Name:mk0278bc73e7e3966be3a382598487f26a606861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:10.660168  134705 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:57:10.660240  134705 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/ca.pem (1119 bytes)
	I0829 18:57:10.660297  134705 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/cert.pem (1164 bytes)
	I0829 18:57:10.660381  134705 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/certs/key.pem (1679 bytes)
	I0829 18:57:10.711292  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:57:10.753511  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:57:10.793922  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:57:10.834652  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:57:10.876902  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:57:10.919556  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:57:10.961795  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:57:11.003499  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:57:11.044883  134705 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:57:11.087498  134705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:57:11.118645  134705 ssh_runner.go:195] Run: openssl version
	I0829 18:57:11.129298  134705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:57:11.151437  134705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:57:11.164917  134705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:57:11.165045  134705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:57:11.181020  134705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:57:11.202202  134705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:57:11.210873  134705 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:57:11.210998  134705 kubeadm.go:392] StartCluster: {Name:addons-444829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-444829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:57:11.211244  134705 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:57:11.254467  134705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:57:11.274816  134705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:57:11.293874  134705 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:57:11.294060  134705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:57:11.310584  134705 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:57:11.310609  134705 kubeadm.go:157] found existing configuration files:
	
	I0829 18:57:11.310721  134705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:57:11.327187  134705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:57:11.327418  134705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:57:11.342220  134705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:57:11.357396  134705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:57:11.357636  134705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:57:11.372473  134705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:57:11.387478  134705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:57:11.387670  134705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:57:11.402416  134705 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:57:11.417622  134705 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:57:11.417837  134705 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:57:11.432240  134705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:57:11.931787  134705 kubeadm.go:310] W0829 18:57:11.930622    1681 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:57:11.933476  134705 kubeadm.go:310] W0829 18:57:11.932629    1681 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:57:12.158776  134705 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:57:25.660245  134705 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:57:25.660344  134705 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:57:25.660456  134705 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:57:25.660665  134705 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:57:25.660879  134705 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:57:25.661069  134705 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:57:25.664329  134705 out.go:235]   - Generating certificates and keys ...
	I0829 18:57:25.664569  134705 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:57:25.664688  134705 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:57:25.664852  134705 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:57:25.664975  134705 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:57:25.665100  134705 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:57:25.665202  134705 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:57:25.665308  134705 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:57:25.665521  134705 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-444829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:57:25.665623  134705 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:57:25.665835  134705 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-444829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:57:25.666017  134705 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:57:25.666211  134705 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:57:25.666303  134705 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:57:25.666422  134705 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:57:25.666523  134705 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:57:25.666653  134705 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:57:25.666767  134705 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:57:25.666897  134705 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:57:25.667049  134705 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:57:25.667206  134705 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:57:25.667355  134705 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:57:25.670790  134705 out.go:235]   - Booting up control plane ...
	I0829 18:57:25.670971  134705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:57:25.671106  134705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:57:25.671229  134705 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:57:25.671455  134705 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:57:25.671611  134705 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:57:25.671692  134705 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:57:25.671924  134705 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:57:25.672246  134705 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:57:25.672365  134705 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001664296s
	I0829 18:57:25.672572  134705 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:57:25.672684  134705 kubeadm.go:310] [api-check] The API server is healthy after 7.503785202s
	I0829 18:57:25.672876  134705 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:57:25.673136  134705 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:57:25.673249  134705 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:57:25.673545  134705 kubeadm.go:310] [mark-control-plane] Marking the node addons-444829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:57:25.673647  134705 kubeadm.go:310] [bootstrap-token] Using token: gwjaey.srvnqw9r4afp9jhz
	I0829 18:57:25.676287  134705 out.go:235]   - Configuring RBAC rules ...
	I0829 18:57:25.676563  134705 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:57:25.676860  134705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:57:25.677249  134705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:57:25.677625  134705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:57:25.677832  134705 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:57:25.678001  134705 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:57:25.678223  134705 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:57:25.678319  134705 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:57:25.678415  134705 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:57:25.678431  134705 kubeadm.go:310] 
	I0829 18:57:25.678542  134705 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:57:25.678557  134705 kubeadm.go:310] 
	I0829 18:57:25.678748  134705 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:57:25.678766  134705 kubeadm.go:310] 
	I0829 18:57:25.678818  134705 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:57:25.679558  134705 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:57:25.679671  134705 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:57:25.679692  134705 kubeadm.go:310] 
	I0829 18:57:25.679796  134705 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:57:25.679811  134705 kubeadm.go:310] 
	I0829 18:57:25.679909  134705 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:57:25.679923  134705 kubeadm.go:310] 
	I0829 18:57:25.680068  134705 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:57:25.680219  134705 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:57:25.680359  134705 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:57:25.680373  134705 kubeadm.go:310] 
	I0829 18:57:25.680538  134705 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:57:25.680815  134705 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:57:25.680875  134705 kubeadm.go:310] 
	I0829 18:57:25.681085  134705 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gwjaey.srvnqw9r4afp9jhz \
	I0829 18:57:25.681307  134705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3200e96fc4393bb3743bb2ca3f6ef72cf9b7114b5a2a6e2b88e4283e83382e66 \
	I0829 18:57:25.681374  134705 kubeadm.go:310] 	--control-plane 
	I0829 18:57:25.681388  134705 kubeadm.go:310] 
	I0829 18:57:25.681565  134705 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:57:25.681579  134705 kubeadm.go:310] 
	I0829 18:57:25.681754  134705 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gwjaey.srvnqw9r4afp9jhz \
	I0829 18:57:25.682005  134705 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3200e96fc4393bb3743bb2ca3f6ef72cf9b7114b5a2a6e2b88e4283e83382e66 
	I0829 18:57:25.682026  134705 cni.go:84] Creating CNI manager for ""
	I0829 18:57:25.682051  134705 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:57:25.686018  134705 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:57:25.690222  134705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:57:25.710764  134705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:57:25.744113  134705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:57:25.744326  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:25.744450  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-444829 minikube.k8s.io/updated_at=2024_08_29T18_57_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=addons-444829 minikube.k8s.io/primary=true
	I0829 18:57:26.332830  134705 ops.go:34] apiserver oom_adj: -16
	I0829 18:57:26.333039  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:26.833552  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:27.333943  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:27.833922  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:28.333350  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:28.833999  134705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:57:29.003665  134705 kubeadm.go:1113] duration metric: took 3.259432753s to wait for elevateKubeSystemPrivileges
	I0829 18:57:29.003711  134705 kubeadm.go:394] duration metric: took 17.792765671s to StartCluster
	I0829 18:57:29.003743  134705 settings.go:142] acquiring lock: {Name:mk39f4bf91ec109899884d28976db52e209ca4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:29.004162  134705 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	I0829 18:57:29.005147  134705 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig: {Name:mk61be63fdfa8ad6e29e4e9b5cb8ab99227aaa64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:57:29.005665  134705 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:57:29.005892  134705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:57:29.006437  134705 config.go:182] Loaded profile config "addons-444829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:57:29.006485  134705 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:57:29.006640  134705 addons.go:69] Setting yakd=true in profile "addons-444829"
	I0829 18:57:29.006713  134705 addons.go:234] Setting addon yakd=true in "addons-444829"
	I0829 18:57:29.006765  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.008119  134705 addons.go:69] Setting inspektor-gadget=true in profile "addons-444829"
	I0829 18:57:29.008183  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.008225  134705 addons.go:234] Setting addon inspektor-gadget=true in "addons-444829"
	I0829 18:57:29.008265  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.009176  134705 addons.go:69] Setting metrics-server=true in profile "addons-444829"
	I0829 18:57:29.009256  134705 addons.go:234] Setting addon metrics-server=true in "addons-444829"
	I0829 18:57:29.009298  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.009345  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.011112  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.011199  134705 addons.go:69] Setting cloud-spanner=true in profile "addons-444829"
	I0829 18:57:29.015146  134705 addons.go:234] Setting addon cloud-spanner=true in "addons-444829"
	I0829 18:57:29.015201  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.016285  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.018837  134705 out.go:177] * Verifying Kubernetes components...
	I0829 18:57:29.011208  134705 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-444829"
	I0829 18:57:29.011214  134705 addons.go:69] Setting default-storageclass=true in profile "addons-444829"
	I0829 18:57:29.011220  134705 addons.go:69] Setting gcp-auth=true in profile "addons-444829"
	I0829 18:57:29.011225  134705 addons.go:69] Setting helm-tiller=true in profile "addons-444829"
	I0829 18:57:29.011241  134705 addons.go:69] Setting ingress=true in profile "addons-444829"
	I0829 18:57:29.011258  134705 addons.go:69] Setting ingress-dns=true in profile "addons-444829"
	I0829 18:57:29.011372  134705 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-444829"
	I0829 18:57:29.011378  134705 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-444829"
	I0829 18:57:29.011383  134705 addons.go:69] Setting registry=true in profile "addons-444829"
	I0829 18:57:29.011394  134705 addons.go:69] Setting storage-provisioner=true in profile "addons-444829"
	I0829 18:57:29.011416  134705 addons.go:69] Setting volumesnapshots=true in profile "addons-444829"
	I0829 18:57:29.011427  134705 addons.go:69] Setting volcano=true in profile "addons-444829"
	I0829 18:57:29.022309  134705 addons.go:234] Setting addon volcano=true in "addons-444829"
	I0829 18:57:29.022392  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.023435  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.044321  134705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:57:29.044615  134705 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-444829"
	I0829 18:57:29.044687  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.045863  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.055775  134705 addons.go:234] Setting addon ingress-dns=true in "addons-444829"
	I0829 18:57:29.055886  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.056774  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.073378  134705 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-444829"
	I0829 18:57:29.074111  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.075158  134705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-444829"
	I0829 18:57:29.075764  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.113251  134705 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-444829"
	I0829 18:57:29.113465  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.114511  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.115053  134705 mustload.go:65] Loading cluster: addons-444829
	I0829 18:57:29.123851  134705 addons.go:234] Setting addon helm-tiller=true in "addons-444829"
	I0829 18:57:29.123963  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.124932  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.135111  134705 addons.go:234] Setting addon registry=true in "addons-444829"
	I0829 18:57:29.139036  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.143428  134705 addons.go:234] Setting addon ingress=true in "addons-444829"
	I0829 18:57:29.143523  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.145601  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.146178  134705 addons.go:234] Setting addon storage-provisioner=true in "addons-444829"
	I0829 18:57:29.146283  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.150762  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.201362  134705 addons.go:234] Setting addon volumesnapshots=true in "addons-444829"
	I0829 18:57:29.201576  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:29.202753  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.294349  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.327805  134705 config.go:182] Loaded profile config "addons-444829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:57:29.376768  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:29.379100  134705 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:57:29.404400  134705 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:57:29.404668  134705 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:57:29.404728  134705 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:57:29.404872  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.411219  134705 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:57:29.411597  134705 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:57:29.411682  134705 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:57:29.411927  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.421258  134705 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:57:29.421374  134705 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:57:29.421552  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.535527  134705 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:57:29.549303  134705 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:57:29.553003  134705 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:57:29.566266  134705 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:57:29.566376  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:57:29.566518  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.679861  134705 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:57:29.685608  134705 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:57:29.685737  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:57:29.685886  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.760026  134705 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:57:29.801283  134705 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:57:29.801314  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:57:29.801442  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.838641  134705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:57:29.838885  134705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:57:29.883644  134705 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:57:29.889219  134705 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:57:29.889275  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:57:29.889530  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:29.941926  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:57:30.043469  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.044034  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:57:30.047717  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:57:30.054026  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:57:30.102203  134705 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:57:30.105740  134705 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:57:30.108917  134705 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:57:30.108963  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:57:30.109093  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.144282  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:57:30.147567  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:57:30.155259  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.155860  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:30.158436  134705 addons.go:234] Setting addon default-storageclass=true in "addons-444829"
	I0829 18:57:30.158552  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:30.159599  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:30.161122  134705 cli_runner.go:217] Completed: docker container inspect addons-444829 --format={{.State.Status}}: (1.015468476s)
	I0829 18:57:30.162346  134705 cli_runner.go:217] Completed: docker container inspect addons-444829 --format={{.State.Status}}: (1.037360147s)
	I0829 18:57:30.162588  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.162737  134705 cli_runner.go:217] Completed: docker container inspect addons-444829 --format={{.State.Status}}: (1.011869827s)
	I0829 18:57:30.163523  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:57:30.172212  134705 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-444829"
	I0829 18:57:30.172277  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:30.173397  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:30.173999  134705 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:57:30.182254  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:57:30.186269  134705 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:57:30.187600  134705 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:57:30.187639  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:57:30.187752  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.188288  134705 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:57:30.189377  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:57:30.189401  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:57:30.189517  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.219411  134705 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:57:30.222989  134705 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:57:30.226352  134705 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:57:30.226398  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:57:30.226546  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.232886  134705 cli_runner.go:217] Completed: docker container inspect addons-444829 --format={{.State.Status}}: (1.030013809s)
	I0829 18:57:30.233766  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.234341  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.235688  134705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:57:30.235788  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:57:30.239059  134705 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:57:30.239670  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.244052  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:57:30.244086  134705 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:57:30.244206  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.452251  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.454505  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.539054  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.546083  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.565077  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.577941  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.667564  134705 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:57:30.667595  134705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:57:30.667716  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.675928  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.697629  134705 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:57:30.700786  134705 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:57:30.703970  134705 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:57:30.704005  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:57:30.704130  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:30.730179  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.764778  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:30.783545  134705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:57:30.783574  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:57:30.810027  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:31.002996  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:57:31.054418  134705 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:57:31.054455  134705 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:57:31.102001  134705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:57:31.102036  134705 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:57:31.109177  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:57:31.180380  134705 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:57:31.180416  134705 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:57:31.288498  134705 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:57:31.288532  134705 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:57:31.333620  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:57:31.333664  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:57:31.339429  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:57:31.378498  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:57:31.439106  134705 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:57:31.439229  134705 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:57:31.505345  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:57:31.577211  134705 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:57:31.577255  134705 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:57:31.642258  134705 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:57:31.642290  134705 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:57:31.710148  134705 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:57:31.710199  134705 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:57:31.719024  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:57:31.755257  134705 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:57:31.755290  134705 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:57:31.801020  134705 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:57:31.801054  134705 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:57:31.835371  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:57:31.835403  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:57:31.845026  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:57:31.892262  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:57:31.907491  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:57:32.086593  134705 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:57:32.086637  134705 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:57:32.108336  134705 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:57:32.108365  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:57:32.127187  134705 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:57:32.127249  134705 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:57:32.157787  134705 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:57:32.157822  134705 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:57:32.263237  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:57:32.263270  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:57:32.306390  134705 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:57:32.306424  134705 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:57:32.704788  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:57:32.717609  134705 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:57:32.717724  134705 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:57:32.733382  134705 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:57:32.733495  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:57:32.778730  134705 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:57:32.778770  134705 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:57:32.798262  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:57:32.866894  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:57:32.866933  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:57:33.190605  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:57:33.190644  134705 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:57:33.238645  134705 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:57:33.238684  134705 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:57:33.286539  134705 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:57:33.286580  134705 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:57:33.337091  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:57:33.569504  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:57:33.569541  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:57:33.605836  134705 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:57:33.605873  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:57:33.674301  134705 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:57:33.674335  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:57:33.891927  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:57:33.897749  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:57:33.897786  134705 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:57:33.915202  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:57:33.947100  134705 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.1081778s)
	I0829 18:57:33.948524  134705 node_ready.go:35] waiting up to 6m0s for node "addons-444829" to be "Ready" ...
	I0829 18:57:33.948790  134705 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.110111476s)
	I0829 18:57:33.948824  134705 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:57:34.367323  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:57:34.367357  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:57:34.597293  134705 node_ready.go:49] node "addons-444829" has status "Ready":"True"
	I0829 18:57:34.597331  134705 node_ready.go:38] duration metric: took 648.771409ms for node "addons-444829" to be "Ready" ...
	I0829 18:57:34.597348  134705 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:57:35.149481  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:57:35.149543  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:57:35.398689  134705 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:57:35.398743  134705 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:57:35.711637  134705 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-444829" context rescaled to 1 replicas
	I0829 18:57:35.958052  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:57:36.240809  134705 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:38.436555  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:57:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:57:40.680242  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:57:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:57:42.811805  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:45.038321  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:47.636804  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:47.927849  134705 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:57:47.928009  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:47.991022  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:48.210451  134705 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:57:48.275961  134705 addons.go:234] Setting addon gcp-auth=true in "addons-444829"
	I0829 18:57:48.276105  134705 host.go:66] Checking if "addons-444829" exists ...
	I0829 18:57:48.276928  134705 cli_runner.go:164] Run: docker container inspect addons-444829 --format={{.State.Status}}
	I0829 18:57:48.376451  134705 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:57:48.376558  134705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444829
	I0829 18:57:48.447743  134705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/addons-444829/id_rsa Username:docker}
	I0829 18:57:49.768098  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:51.797390  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:54.582638  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:56.655571  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:56.795919  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (25.792748262s)
	I0829 18:57:56.796184  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (25.686967299s)
	I0829 18:57:56.796226  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (25.456770653s)
	I0829 18:57:56.796333  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (25.417806238s)
	I0829 18:57:56.796562  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (25.291170789s)
	I0829 18:57:56.798015  134705 addons.go:475] Verifying addon ingress=true in "addons-444829"
	I0829 18:57:56.796631  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (25.077550331s)
	I0829 18:57:56.796705  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (24.951646863s)
	I0829 18:57:56.796806  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (24.904507962s)
	I0829 18:57:56.796848  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (24.889325969s)
	I0829 18:57:56.796896  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (24.091960475s)
	I0829 18:57:56.796941  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (23.998643855s)
	I0829 18:57:56.797031  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (23.459899739s)
	I0829 18:57:56.797188  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (22.905209028s)
	I0829 18:57:56.797295  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (22.88205154s)
	I0829 18:57:56.798604  134705 addons.go:475] Verifying addon metrics-server=true in "addons-444829"
	I0829 18:57:56.799005  134705 addons.go:475] Verifying addon registry=true in "addons-444829"
	W0829 18:57:56.799657  134705 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:57:56.801041  134705 retry.go:31] will retry after 281.351302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:57:56.803927  134705 out.go:177] * Verifying ingress addon...
	I0829 18:57:56.804071  134705 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-444829 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:57:56.804152  134705 out.go:177] * Verifying registry addon...
	I0829 18:57:56.808206  134705 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:57:56.811859  134705 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:57:57.083493  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:57:57.401303  134705 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:57:57.401415  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:57.413897  134705 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:57:57.414031  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0829 18:57:57.482153  134705 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0829 18:57:57.555252  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:57.556459  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:57.973117  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:57.975848  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:58.372842  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:58.375095  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:59.499389  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:59.501122  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:59.797057  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:00.126311  134705 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (11.749813716s)
	I0829 18:58:00.127368  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (24.169245943s)
	I0829 18:58:00.127531  134705 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-444829"
	I0829 18:58:00.130325  134705 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:58:00.130598  134705 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:58:00.133097  134705 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:58:00.134772  134705 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:58:00.136795  134705 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:58:00.136819  134705 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:58:00.144291  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:00.158585  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:00.199183  134705 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:58:00.199289  134705 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:58:00.295305  134705 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:58:00.295339  134705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:58:00.384831  134705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:58:00.683120  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:00.685614  134705 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:58:00.685656  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:00.686852  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:01.158993  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:01.160748  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:01.161710  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:01.562279  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:01.574921  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:01.597806  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:01.670223  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:01.671281  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:01.676230  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:02.021838  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:02.050796  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:02.071305  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:02.169166  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:02.475992  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:02.500838  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:02.671063  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:02.772801  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.689186389s)
	I0829 18:58:02.873167  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:02.880715  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:03.091253  134705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.706363188s)
	I0829 18:58:03.098706  134705 addons.go:475] Verifying addon gcp-auth=true in "addons-444829"
	I0829 18:58:03.102841  134705 out.go:177] * Verifying gcp-auth addon...
	I0829 18:58:03.107522  134705 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:58:03.171751  134705 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:58:03.174206  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:03.321051  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:03.323235  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:03.652921  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:03.844926  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:03.849465  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:04.150807  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:04.282064  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:04.319521  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:04.327584  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:04.648989  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:04.828011  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:04.838984  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:05.157083  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:05.323921  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:05.339135  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:05.654109  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:05.824975  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:05.836883  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:06.159637  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:06.331165  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:06.958749  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:06.977115  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:06.987730  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:07.005270  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:07.101288  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:07.152255  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:07.343052  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:07.366437  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:07.649176  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:07.852704  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:07.854649  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:08.144770  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:08.340737  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:08.348100  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:08.652602  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:08.843467  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:08.846852  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:09.152657  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:09.258390  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:09.319316  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:09.320827  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:09.645280  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:09.831878  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:09.832431  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:10.145518  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:10.319849  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:10.324634  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:10.645462  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:10.822577  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:10.823450  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:11.145823  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:11.260735  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:11.320702  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:11.326402  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:11.646518  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:11.825737  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:11.837273  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:12.142371  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:12.326892  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:12.328348  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:12.645414  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:12.816762  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:12.820681  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:13.151150  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:13.315357  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:13.317058  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:13.646701  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:13.754093  134705 pod_ready.go:103] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"False"
	I0829 18:58:13.837339  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:13.839576  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:14.144100  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:14.321656  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:14.323619  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:14.644550  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:14.819126  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:14.822975  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:15.149292  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:15.320216  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:15.321806  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:15.643119  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:15.815443  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:15.824325  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:16.142788  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:16.277214  134705 pod_ready.go:93] pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.277249  134705 pod_ready.go:82] duration metric: took 40.036387986s for pod "coredns-6f6b679f8f-ppj8d" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.277265  134705 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tg4gm" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.299700  134705 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-tg4gm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-tg4gm" not found
	I0829 18:58:16.299738  134705 pod_ready.go:82] duration metric: took 22.460956ms for pod "coredns-6f6b679f8f-tg4gm" in "kube-system" namespace to be "Ready" ...
	E0829 18:58:16.299756  134705 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-tg4gm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-tg4gm" not found
	I0829 18:58:16.299769  134705 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.314834  134705 pod_ready.go:93] pod "etcd-addons-444829" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.314872  134705 pod_ready.go:82] duration metric: took 15.087886ms for pod "etcd-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.314889  134705 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.322693  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:16.334623  134705 pod_ready.go:93] pod "kube-apiserver-addons-444829" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.334658  134705 pod_ready.go:82] duration metric: took 19.75696ms for pod "kube-apiserver-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.334676  134705 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.343167  134705 pod_ready.go:93] pod "kube-controller-manager-addons-444829" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.343200  134705 pod_ready.go:82] duration metric: took 8.512975ms for pod "kube-controller-manager-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.343218  134705 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lrr49" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.418853  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:16.448494  134705 pod_ready.go:93] pod "kube-proxy-lrr49" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.448616  134705 pod_ready.go:82] duration metric: took 105.381866ms for pod "kube-proxy-lrr49" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.448706  134705 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.644726  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:16.827683  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:16.829691  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:16.852980  134705 pod_ready.go:93] pod "kube-scheduler-addons-444829" in "kube-system" namespace has status "Ready":"True"
	I0829 18:58:16.853014  134705 pod_ready.go:82] duration metric: took 404.266124ms for pod "kube-scheduler-addons-444829" in "kube-system" namespace to be "Ready" ...
	I0829 18:58:16.853030  134705 pod_ready.go:39] duration metric: took 42.2556634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
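	The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A hedged equivalent of that loop, written directly against client-go rather than minikube's helpers (pod name and the 6m0s budget are copied from the log purely for illustration):

// Sketch (assumed, not minikube source): poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 6m0s matches the per-pod budget shown in the waits above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-ppj8d", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}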
	I0829 18:58:16.853122  134705 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:58:16.853343  134705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:58:16.901855  134705 api_server.go:72] duration metric: took 47.896136684s to wait for apiserver process to appear ...
	I0829 18:58:16.901926  134705 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:58:16.902011  134705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:58:16.918104  134705 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:58:16.921624  134705 api_server.go:141] control plane version: v1.31.0
	I0829 18:58:16.921668  134705 api_server.go:131] duration metric: took 19.729485ms to wait for apiserver health ...
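	The healthz check recorded above is a plain HTTPS GET against the apiserver that expects a 200 response with the body "ok". A minimal standalone sketch; certificate verification is skipped here only to keep the example short, whereas a real check would trust the cluster CA:

// Sketch: probe the apiserver /healthz endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo-only shortcut; use the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}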
	I0829 18:58:16.921683  134705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:58:17.142487  134705 system_pods.go:59] 18 kube-system pods found
	I0829 18:58:17.142620  134705 system_pods.go:61] "coredns-6f6b679f8f-ppj8d" [b690380f-2308-4101-b7c3-1936946ad4b7] Running
	I0829 18:58:17.142700  134705 system_pods.go:61] "csi-hostpath-attacher-0" [07f99d06-2ef1-4fdf-98b7-bee097dbbdd4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:58:17.142755  134705 system_pods.go:61] "csi-hostpath-resizer-0" [8b6786bf-d9d2-419b-9c31-6ed8b724f00f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:58:17.142841  134705 system_pods.go:61] "csi-hostpathplugin-x8kst" [a29e161c-c5a8-4333-882c-9a786316db71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:58:17.142893  134705 system_pods.go:61] "etcd-addons-444829" [87d3ad0b-d382-4588-9db3-98b48456c29b] Running
	I0829 18:58:17.142923  134705 system_pods.go:61] "kube-apiserver-addons-444829" [7360175c-6922-4a0f-a0a9-c452583867e0] Running
	I0829 18:58:17.142977  134705 system_pods.go:61] "kube-controller-manager-addons-444829" [790164eb-1c23-45e1-ab6d-2e60541976f4] Running
	I0829 18:58:17.143030  134705 system_pods.go:61] "kube-ingress-dns-minikube" [329d18e5-1318-4012-a5da-061980c6c7e0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:58:17.143064  134705 system_pods.go:61] "kube-proxy-lrr49" [025c8463-cb5d-46db-bbae-468eabc02959] Running
	I0829 18:58:17.143110  134705 system_pods.go:61] "kube-scheduler-addons-444829" [0a50b2c6-8c72-4452-be50-6dc0a280ee80] Running
	I0829 18:58:17.143152  134705 system_pods.go:61] "metrics-server-8988944d9-q8fls" [c596bb1f-a383-47e0-a471-19152ad102bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:58:17.143183  134705 system_pods.go:61] "nvidia-device-plugin-daemonset-zvvqm" [6500c674-6c07-4c1a-9b34-9077f15f24df] Running
	I0829 18:58:17.143233  134705 system_pods.go:61] "registry-6fb4cdfc84-fkt2b" [a6d7d0ad-2e5b-410e-b7d4-b63cbe093d11] Running
	I0829 18:58:17.143288  134705 system_pods.go:61] "registry-proxy-7rgrn" [b3ffbbdb-4b08-4fdc-8900-6cf736b4468f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:58:17.143325  134705 system_pods.go:61] "snapshot-controller-56fcc65765-4nln6" [a575a8a8-2b5d-4708-a58b-745d5076b8be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:58:17.143388  134705 system_pods.go:61] "snapshot-controller-56fcc65765-6b96g" [603bb1c8-131f-479c-b779-aac9c9db8688] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:58:17.143419  134705 system_pods.go:61] "storage-provisioner" [58910902-8951-4e4f-a115-6f657fa4e615] Running
	I0829 18:58:17.143471  134705 system_pods.go:61] "tiller-deploy-b48cc5f79-lm452" [b5208811-56ec-4d66-b144-d6d7814e857e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:58:17.143517  134705 system_pods.go:74] duration metric: took 221.823019ms to wait for pod list to return data ...
	I0829 18:58:17.143555  134705 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:58:17.172249  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:17.246199  134705 default_sa.go:45] found service account: "default"
	I0829 18:58:17.246302  134705 default_sa.go:55] duration metric: took 102.692171ms for default service account to be created ...
	I0829 18:58:17.246338  134705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:58:17.320018  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:17.326329  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:17.462893  134705 system_pods.go:86] 18 kube-system pods found
	I0829 18:58:17.462945  134705 system_pods.go:89] "coredns-6f6b679f8f-ppj8d" [b690380f-2308-4101-b7c3-1936946ad4b7] Running
	I0829 18:58:17.462984  134705 system_pods.go:89] "csi-hostpath-attacher-0" [07f99d06-2ef1-4fdf-98b7-bee097dbbdd4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:58:17.463001  134705 system_pods.go:89] "csi-hostpath-resizer-0" [8b6786bf-d9d2-419b-9c31-6ed8b724f00f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:58:17.463019  134705 system_pods.go:89] "csi-hostpathplugin-x8kst" [a29e161c-c5a8-4333-882c-9a786316db71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:58:17.463031  134705 system_pods.go:89] "etcd-addons-444829" [87d3ad0b-d382-4588-9db3-98b48456c29b] Running
	I0829 18:58:17.463045  134705 system_pods.go:89] "kube-apiserver-addons-444829" [7360175c-6922-4a0f-a0a9-c452583867e0] Running
	I0829 18:58:17.463055  134705 system_pods.go:89] "kube-controller-manager-addons-444829" [790164eb-1c23-45e1-ab6d-2e60541976f4] Running
	I0829 18:58:17.463078  134705 system_pods.go:89] "kube-ingress-dns-minikube" [329d18e5-1318-4012-a5da-061980c6c7e0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:58:17.463097  134705 system_pods.go:89] "kube-proxy-lrr49" [025c8463-cb5d-46db-bbae-468eabc02959] Running
	I0829 18:58:17.463110  134705 system_pods.go:89] "kube-scheduler-addons-444829" [0a50b2c6-8c72-4452-be50-6dc0a280ee80] Running
	I0829 18:58:17.463126  134705 system_pods.go:89] "metrics-server-8988944d9-q8fls" [c596bb1f-a383-47e0-a471-19152ad102bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:58:17.463145  134705 system_pods.go:89] "nvidia-device-plugin-daemonset-zvvqm" [6500c674-6c07-4c1a-9b34-9077f15f24df] Running
	I0829 18:58:17.463156  134705 system_pods.go:89] "registry-6fb4cdfc84-fkt2b" [a6d7d0ad-2e5b-410e-b7d4-b63cbe093d11] Running
	I0829 18:58:17.463175  134705 system_pods.go:89] "registry-proxy-7rgrn" [b3ffbbdb-4b08-4fdc-8900-6cf736b4468f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:58:17.463196  134705 system_pods.go:89] "snapshot-controller-56fcc65765-4nln6" [a575a8a8-2b5d-4708-a58b-745d5076b8be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:58:17.463218  134705 system_pods.go:89] "snapshot-controller-56fcc65765-6b96g" [603bb1c8-131f-479c-b779-aac9c9db8688] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:58:17.463228  134705 system_pods.go:89] "storage-provisioner" [58910902-8951-4e4f-a115-6f657fa4e615] Running
	I0829 18:58:17.463248  134705 system_pods.go:89] "tiller-deploy-b48cc5f79-lm452" [b5208811-56ec-4d66-b144-d6d7814e857e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:58:17.463268  134705 system_pods.go:126] duration metric: took 216.871932ms to wait for k8s-apps to be running ...
	I0829 18:58:17.463288  134705 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:58:17.463398  134705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:58:17.492657  134705 system_svc.go:56] duration metric: took 29.356387ms WaitForService to wait for kubelet
	I0829 18:58:17.492703  134705 kubeadm.go:582] duration metric: took 48.486988388s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:58:17.492739  134705 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:58:17.643841  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:17.655523  134705 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0829 18:58:17.655572  134705 node_conditions.go:123] node cpu capacity is 2
	I0829 18:58:17.655596  134705 node_conditions.go:105] duration metric: took 162.846249ms to run NodePressure ...
	I0829 18:58:17.655618  134705 start.go:241] waiting for startup goroutines ...
	I0829 18:58:17.815305  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:17.823621  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:18.144886  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:18.318358  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:18.321810  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:18.744511  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:18.815319  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:18.818514  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:19.244519  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:19.460535  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:19.464014  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:19.796803  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:19.815925  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:19.820260  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:20.144969  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:20.357081  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:20.363659  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:20.646941  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:20.822738  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:20.824030  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:21.162937  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:21.330297  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:21.340505  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:21.695768  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:21.817526  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:21.825674  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:22.172218  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:22.382496  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:22.384706  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:22.932371  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:22.932414  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:22.934506  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:23.143208  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:23.349313  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:23.361213  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:23.662404  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:23.820767  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:23.822121  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:24.143367  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:24.316402  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:24.318054  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:24.651729  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:24.821343  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:24.835104  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:25.145898  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:25.339379  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:25.346609  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:25.755242  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:25.852623  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:25.854322  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:26.156404  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:26.314484  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:26.318114  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:26.668374  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:26.872382  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:26.874682  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:27.165904  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:27.364568  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:27.367275  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:27.685110  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:27.901665  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:27.904743  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:28.154142  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:28.339546  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:28.346208  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:28.653982  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:28.820980  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:28.825462  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:29.148802  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:29.316632  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:29.322256  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:29.643454  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:29.823608  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:29.831003  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:30.144493  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:30.328899  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:30.337461  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:30.652782  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:30.830594  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:30.839193  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:31.155833  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:31.332503  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:31.356089  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:31.651147  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:31.846102  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:31.849632  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:32.163971  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:32.325934  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:32.329324  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:32.652521  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:32.905692  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:32.909508  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:33.302945  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:33.607972  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:33.717924  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:33.728010  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:33.837988  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:33.839805  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:34.165425  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:34.321758  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:34.323653  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:34.659001  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:34.825366  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:58:34.826741  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:35.142621  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:35.321068  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:35.333404  134705 kapi.go:107] duration metric: took 38.521536953s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:58:35.646398  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:35.821838  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:36.150838  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:36.339409  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:36.670510  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:36.833340  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:37.154032  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:37.322940  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:37.658662  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:37.832942  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:38.175800  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:38.318698  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:38.644305  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:38.825233  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:39.147029  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:39.315670  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:39.644028  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:40.014341  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:40.142756  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:40.450236  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:40.760939  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:40.817372  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:41.142807  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:41.328622  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:41.649882  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:41.818817  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:42.173071  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:42.316567  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:42.644921  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:42.825167  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:43.150176  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:43.318092  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:43.643707  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:43.820892  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:44.163656  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:44.321591  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:44.679156  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:44.958894  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:45.144887  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:45.316231  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:45.648137  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:45.838272  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:46.170513  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:46.317426  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:46.643234  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:46.854636  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:47.181471  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:47.323823  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:47.660717  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:48.049564  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:48.141790  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:48.315208  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:48.654078  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:48.818283  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:49.157057  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:49.316501  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:49.642973  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:49.817246  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:50.142895  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:50.323809  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:50.646495  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:50.820465  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:51.142968  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:51.324275  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:51.648981  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:51.860387  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:52.150841  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:52.331153  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:52.656490  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:52.827055  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:53.218320  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:53.329393  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:53.661430  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:53.917267  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:54.157824  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:54.334414  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:54.660572  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:54.820889  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:55.152057  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:55.363455  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:55.741257  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:55.815596  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:56.143332  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:56.583558  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:56.756415  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:57.004326  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:57.271909  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:57.319322  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:57.789806  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:57.883475  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:58.248279  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:58.336794  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:58.649300  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:58.857339  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:59.174505  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:59.319484  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:58:59.657235  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:58:59.827856  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:00.147841  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:00.324789  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:00.646785  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:00.826833  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:01.153575  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:01.393314  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:01.645642  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:01.815530  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:02.147687  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:02.346052  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:02.652546  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:02.815498  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:03.168519  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:03.319806  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:03.646369  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:03.816778  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:04.169744  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:04.321005  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:04.647451  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:04.870076  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:05.219269  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:05.350059  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:05.658739  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:05.818189  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:06.143874  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:06.324844  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:06.645108  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:06.889309  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:07.277609  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:07.425407  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:07.798232  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:07.898730  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:08.150944  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:08.326651  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:08.749078  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:08.825405  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:09.152755  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:09.350022  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:09.645100  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:09.818542  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:10.165241  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:10.362768  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:10.641939  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:11.069370  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:11.142281  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:11.317416  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:11.656104  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:11.821158  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:12.176227  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:12.321792  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:12.642879  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:12.816393  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:13.154305  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:13.316093  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:13.648419  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:13.841749  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:14.142525  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:14.413736  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:14.664360  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:14.817791  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:15.505721  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:15.507703  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:15.654282  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:15.839586  134705 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:59:16.146085  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:16.342731  134705 kapi.go:107] duration metric: took 1m19.534503268s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:59:16.698939  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:17.146113  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:17.676577  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:18.242739  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:18.663274  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:19.154886  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:19.646966  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:20.145621  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:20.647615  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:21.152293  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:21.663715  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:22.224675  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:22.647594  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:23.146983  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:23.649202  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:24.143078  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:24.642409  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:59:25.142523  134705 kapi.go:107] duration metric: took 1m25.00774234s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:59:25.614024  134705 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:59:25.614147  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:26.113856  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:26.614156  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:27.114098  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:27.615326  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:28.133417  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:28.613453  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:29.115370  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:29.613554  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:30.113435  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:30.612756  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:31.113198  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:31.612942  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:32.116379  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:32.616042  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:33.114890  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:33.614902  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:34.113149  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:34.613413  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:35.114106  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:35.613357  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:36.112518  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:36.613335  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:37.113195  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:37.612826  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:38.112630  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:38.613203  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:39.112359  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:39.613408  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:40.113618  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:40.613615  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:41.113082  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:41.613048  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:42.112634  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:42.614038  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:43.114144  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:43.613922  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:44.114710  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:44.613086  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:45.113703  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:45.612892  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:46.113444  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:46.613770  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:47.113080  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:47.613011  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:48.113821  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:48.616388  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:49.112852  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:49.613442  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:50.113733  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:50.613429  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:51.112562  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:51.612805  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:52.112841  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:52.613395  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:53.113013  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:53.612351  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:54.113004  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:54.615136  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:55.116101  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:55.613729  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:56.112348  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:56.614653  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:57.113919  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:57.615320  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:58.115668  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:58.626612  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:59.111907  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:59.613128  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:00.113256  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:00.613339  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:01.113146  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:01.613331  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:02.112633  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:02.613125  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:03.112727  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:03.614083  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:04.117941  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:04.613114  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:05.113622  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:05.616083  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:06.118495  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:06.617666  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:07.124049  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:07.625868  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:08.113166  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:08.613794  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:09.114136  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:09.617244  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:10.113666  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:10.615084  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:11.113256  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:11.612359  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:12.115309  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:12.639321  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:13.113292  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:13.612316  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:14.113069  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:14.612614  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:15.113680  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:15.612973  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:16.113928  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:16.613063  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:17.113579  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:17.611785  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:18.115063  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:18.612504  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:19.114582  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:19.613412  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:20.114259  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:20.617051  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:21.112693  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:21.612442  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:22.123495  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:22.612558  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:23.114357  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:23.614567  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:24.113825  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:24.612494  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:25.113304  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:25.622178  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:26.114285  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:26.612259  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:27.113402  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:27.613038  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:28.113456  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:28.612907  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:29.114193  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:29.614792  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:30.130089  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:30.613492  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:31.121128  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:31.616968  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:32.116237  134705 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 19:00:32.617586  134705 kapi.go:107] duration metric: took 2m29.510060994s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 19:00:32.622383  134705 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-444829 cluster.
	I0829 19:00:32.626513  134705 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 19:00:32.630914  134705 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 19:00:32.634657  134705 out.go:177] * Enabled addons: volcano, ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0829 19:00:32.638480  134705 addons.go:510] duration metric: took 3m3.631950318s for enable addons: enabled=[volcano ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0829 19:00:32.638655  134705 start.go:246] waiting for cluster config update ...
	I0829 19:00:32.638738  134705 start.go:255] writing updated cluster config ...
	I0829 19:00:32.639526  134705 ssh_runner.go:195] Run: rm -f paused
	I0829 19:00:33.166412  134705 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:00:33.170720  134705 out.go:177] * Done! kubectl is now configured to use "addons-444829" cluster and "default" namespace by default
	
	
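The kapi.go entries above show minikube polling each addon's label selector until its pods leave Pending, then reporting the total wait as a duration metric. Below is a minimal client-go sketch of that polling pattern, assuming a local kubeconfig and reusing one of the selectors from the log purely as an example; it is an illustration of the technique, not minikube's actual kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in namespace ns is Running,
// logging the current phase on each attempt, similar to the kapi.go lines above.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Assumes the default kubeconfig location; selector and timeout are examples only.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		panic(err)
	}
}

The fixed 500ms sleep mirrors the roughly half-second spacing visible between consecutive gcp-auth wait lines in the log; a production wait would typically use a watch or an exponential backoff instead of a fixed interval.
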
	==> Docker <==
	Aug 29 19:10:09 addons-444829 dockerd[1161]: time="2024-08-29T19:10:09.736193096Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 19:10:09 addons-444829 dockerd[1161]: time="2024-08-29T19:10:09.740604661Z" level=error msg="Error running exec 1c991a0a6cdb1d0e6d1f043ca78f2bc01780bb32824e3e5a588614d17db6a020 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Aug 29 19:10:09 addons-444829 dockerd[1161]: time="2024-08-29T19:10:09.761173673Z" level=info msg="ignoring event" container=0b3ea191d99d8b3ad81b65e824c2acaa834c2238d9993a67b31827ba64f1ddc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:15 addons-444829 dockerd[1161]: time="2024-08-29T19:10:15.380341203Z" level=info msg="ignoring event" container=8216fc4da60a3d5ca7e184874784842cb9356608470ce3faef76a0354aa79c91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:15 addons-444829 dockerd[1161]: time="2024-08-29T19:10:15.474863079Z" level=info msg="ignoring event" container=a41852e18cb6d4e812cbf053800e765d21c8a4ab11e5adefc8b92d112222559b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:15 addons-444829 dockerd[1161]: time="2024-08-29T19:10:15.733071617Z" level=info msg="ignoring event" container=3ecb18f53ded2e276e72a3a6a8b06fbee73c8a5d3f0aaa6bef20647f76735b64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:15 addons-444829 cri-dockerd[1417]: time="2024-08-29T19:10:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-56fcc65765-6b96g_kube-system\": unexpected command output nsenter: cannot open /proc/4876/ns/net: No such file or directory\n with error: exit status 1"
	Aug 29 19:10:15 addons-444829 dockerd[1161]: time="2024-08-29T19:10:15.902168726Z" level=info msg="ignoring event" container=c430abc4132cedbe2efd5e99bd47f944bd4440a1081e33035f49fd1dfd6112b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:16 addons-444829 dockerd[1161]: time="2024-08-29T19:10:16.243453425Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 19:10:16 addons-444829 dockerd[1161]: time="2024-08-29T19:10:16.246298574Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 19:10:22 addons-444829 cri-dockerd[1417]: time="2024-08-29T19:10:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/72f136fd32d85959aecd12d5d2160d754b771cfb29b7ded31ed784153b2aa030/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Aug 29 19:10:24 addons-444829 cri-dockerd[1417]: time="2024-08-29T19:10:24Z" level=info msg="Stop pulling image docker.io/alpine/helm:2.16.3: Status: Downloaded newer image for alpine/helm:2.16.3"
	Aug 29 19:10:25 addons-444829 dockerd[1161]: time="2024-08-29T19:10:25.015568667Z" level=info msg="ignoring event" container=29b199bda7907f106b3c452474d458482534994bca2eaa8ef0cd5c0b2e9100e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:25 addons-444829 dockerd[1161]: time="2024-08-29T19:10:25.053106612Z" level=warning msg="failed to close stdin: NotFound: task 29b199bda7907f106b3c452474d458482534994bca2eaa8ef0cd5c0b2e9100e6 not found: not found"
	Aug 29 19:10:26 addons-444829 dockerd[1161]: time="2024-08-29T19:10:26.405807043Z" level=info msg="ignoring event" container=72f136fd32d85959aecd12d5d2160d754b771cfb29b7ded31ed784153b2aa030 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:27 addons-444829 dockerd[1161]: time="2024-08-29T19:10:27.489357486Z" level=info msg="ignoring event" container=7c6ca26b1c308bf476d9e52293509e4c0578cad3307d360b1d20592085e4f630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:27 addons-444829 dockerd[1161]: time="2024-08-29T19:10:27.752445873Z" level=info msg="ignoring event" container=10a5b2ab1f7cf58b724f5893fde7651ee932349104921e65de17b78eb2a235a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:35 addons-444829 dockerd[1161]: time="2024-08-29T19:10:35.543452689Z" level=info msg="ignoring event" container=27f7ff06793f50b9b32cc1226abda94cb2400a0f13bb35b7adecab770141a76b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:35 addons-444829 dockerd[1161]: time="2024-08-29T19:10:35.744471702Z" level=info msg="ignoring event" container=2d6a726e051d4b5c28f08eb097db87bd4bc2e4a808aa78481cb3f7bbd7c14455 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:38 addons-444829 dockerd[1161]: time="2024-08-29T19:10:38.262394436Z" level=info msg="ignoring event" container=1ebdc43a673c36161783b4ac7a2178cb3ab6f9e07b4c804b387a1c1337b3e3f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:39 addons-444829 dockerd[1161]: time="2024-08-29T19:10:39.388807385Z" level=info msg="ignoring event" container=52f047f105547993eb7420020a43443273e0ffc76c47e32533d5354dfeb55d23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:39 addons-444829 dockerd[1161]: time="2024-08-29T19:10:39.606222452Z" level=info msg="ignoring event" container=683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:39 addons-444829 dockerd[1161]: time="2024-08-29T19:10:39.765042649Z" level=info msg="ignoring event" container=c41ba95394e06a7039c676e4467ed60a0a669ba0df36e21c98728a0380377ef9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:40 addons-444829 dockerd[1161]: time="2024-08-29T19:10:40.201037013Z" level=info msg="ignoring event" container=722752de32349056d97f2e70e68ac57d94dc519ba89aa15e509848de0a986efb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:10:41 addons-444829 dockerd[1161]: time="2024-08-29T19:10:41.300503452Z" level=info msg="ignoring event" container=94c1ad6c17b5c96b2eedd07e5df92e6282d3d09a37c8c10839b416f4b7b2a00e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	0b3ea191d99d8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            35 seconds ago      Exited              gadget                     7                   94c1ad6c17b5c       gadget-b66tg
	cf944e83ec9f2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   948097f90a5d4       gcp-auth-89d5ffd79-n8ddt
	b91d19c6f93e0       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   a50231aa7ad83       ingress-nginx-controller-bc57996ff-mscmm
	d516366ef3a56       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                      1                   670ad9a3e5d1c       ingress-nginx-admission-patch-vgftr
	8726ead54d2e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   cb8f3c848ad39       ingress-nginx-admission-create-jqvkr
	85cd11506c386       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   5aac6a18b2cc3       yakd-dashboard-67d98fc6b-sz9hc
	056ede8bac531       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   5a75976294766       local-path-provisioner-86d989889c-ljzqg
	bd3b0fdf91be5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   ae302961c8142       kube-ingress-dns-minikube
	a7e801958c6b4       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   fffafe402569a       nvidia-device-plugin-daemonset-zvvqm
	15c8d0a05d880       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   7f883831aecce       cloud-spanner-emulator-769b77f747-8n98q
	8ff4272abf398       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner        0                   b6f3564277830       storage-provisioner
	c935af55eac31       cbb01a7bd410d                                                                                                                13 minutes ago      Running             coredns                    0                   be9d07f2d6530       coredns-6f6b679f8f-ppj8d
	d8f73c8a85421       ad83b2ca7b09e                                                                                                                13 minutes ago      Running             kube-proxy                 0                   a90c343566225       kube-proxy-lrr49
	714be92672c1c       045733566833c                                                                                                                13 minutes ago      Running             kube-controller-manager    0                   76b7dd313fce2       kube-controller-manager-addons-444829
	a151007558d9c       2e96e5913fc06                                                                                                                13 minutes ago      Running             etcd                       0                   cef05e6059f5d       etcd-addons-444829
	c50767d3f334f       1766f54c897f0                                                                                                                13 minutes ago      Running             kube-scheduler             0                   9f7e5cf797df0       kube-scheduler-addons-444829
	9a6cad5c1df59       604f5db92eaa8                                                                                                                13 minutes ago      Running             kube-apiserver             0                   269d7df880ffc       kube-apiserver-addons-444829
	
	
	==> controller_ingress [b91d19c6f93e] <==
	W0829 18:59:15.699630       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0829 18:59:15.699982       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0829 18:59:15.708364       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/amd64"
	I0829 18:59:16.459232       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0829 18:59:16.494847       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0829 18:59:16.547463       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0829 18:59:16.618580       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3b4ea99c-24e1-4610-a25b-850b3bd1de92", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0829 18:59:16.689174       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"f1a5f30d-454f-4466-8844-1fb0e148fac6", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0829 18:59:16.689281       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"9b8cf119-e40b-404c-ab0f-a4d539d40607", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0829 18:59:17.751320       7 nginx.go:317] "Starting NGINX process"
	I0829 18:59:17.752932       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0829 18:59:17.756365       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0829 18:59:17.762257       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:59:17.866487       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0829 18:59:17.876568       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-mscmm"
	I0829 18:59:17.970521       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-mscmm" node="addons-444829"
	I0829 18:59:18.106748       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:59:18.107043       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0829 18:59:18.107606       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-mscmm", UID:"a1e8765f-bb19-4d9d-9092-014559e76d36", APIVersion:"v1", ResourceVersion:"1292", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c935af55eac3] <==
	[INFO] 10.244.0.9:42736 - 26925 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000277476s
	[INFO] 10.244.0.9:41093 - 39094 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097835s
	[INFO] 10.244.0.9:41093 - 56763 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113181s
	[INFO] 10.244.0.9:43352 - 3441 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000110827s
	[INFO] 10.244.0.9:43352 - 64893 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000195043s
	[INFO] 10.244.0.9:41459 - 63506 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.00010789s
	[INFO] 10.244.0.9:41459 - 62486 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000308133s
	[INFO] 10.244.0.9:35797 - 47674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000101337s
	[INFO] 10.244.0.9:35797 - 38462 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000072018s
	[INFO] 10.244.0.9:60557 - 57850 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140282s
	[INFO] 10.244.0.9:60557 - 62207 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241462s
	[INFO] 10.244.0.26:42434 - 57044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000594259s
	[INFO] 10.244.0.26:54141 - 62653 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000300597s
	[INFO] 10.244.0.26:36752 - 7715 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000363311s
	[INFO] 10.244.0.26:37060 - 49215 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00071733s
	[INFO] 10.244.0.26:39606 - 17720 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018349s
	[INFO] 10.244.0.26:47500 - 25186 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170153s
	[INFO] 10.244.0.26:37974 - 33807 "A IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.004439306s
	[INFO] 10.244.0.26:45332 - 32830 "AAAA IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.005966426s
	[INFO] 10.244.0.26:56225 - 17625 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.004193916s
	[INFO] 10.244.0.26:53258 - 60211 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003636883s
	[INFO] 10.244.0.26:56534 - 48886 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003424779s
	[INFO] 10.244.0.26:37222 - 27371 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006453327s
	[INFO] 10.244.0.26:41557 - 4835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003235755s
	[INFO] 10.244.0.26:53120 - 3620 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003326736s
	
	
	==> describe nodes <==
	Name:               addons-444829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-444829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=addons-444829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_57_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-444829
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-444829
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:10:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:10:34 +0000   Thu, 29 Aug 2024 18:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:10:34 +0000   Thu, 29 Aug 2024 18:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:10:34 +0000   Thu, 29 Aug 2024 18:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:10:34 +0000   Thu, 29 Aug 2024 18:57:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-444829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c559be441842416baa387f6a8cefe148
	  System UUID:                70289302-dbde-4fe4-baa8-8a37a8b254fe
	  Boot ID:                    792db706-7214-421d-bb31-a209937443ca
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  default                     cloud-spanner-emulator-769b77f747-8n98q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gcp-auth                    gcp-auth-89d5ffd79-n8ddt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-mscmm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-ppj8d                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-addons-444829                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-444829                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-444829       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-lrr49                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-444829                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 nvidia-device-plugin-daemonset-zvvqm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-ljzqg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-sz9hc              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-444829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-444829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node addons-444829 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-444829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-444829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-444829 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node addons-444829 event: Registered Node addons-444829 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 72 c6 ab 70 86 9d 08 06
	[  +0.185308] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 69 30 0e 70 b8 08 06
	[  +2.080977] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 52 3b 62 b3 b8 08 06
	[Aug29 18:59] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 2e 9a 2e f8 35 08 06
	[  +8.400675] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e b3 79 5f 91 09 08 06
	[  +3.512170] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 58 60 71 61 66 08 06
	[  +0.602172] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 8a ae f0 3d ed 86 08 06
	[  +0.276427] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 51 e3 3c aa 97 08 06
	[Aug29 19:00] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 75 0f ba 55 c6 08 06
	[  +0.248840] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e cc 63 b3 20 c5 08 06
	[ +24.532299] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ea f0 f2 ad 20 fa 08 06
	[  +0.000939] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b1 42 b8 7b f0 08 06
	[Aug29 19:10] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ab 4d d6 2e 8d 08 06
	
	
	==> etcd [a151007558d9] <==
	{"level":"warn","ts":"2024-08-29T19:00:59.107373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.241288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T19:00:59.107468Z","caller":"traceutil/trace.go:171","msg":"trace[1030034939] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1595; }","duration":"152.352271ms","start":"2024-08-29T19:00:58.955096Z","end":"2024-08-29T19:00:59.107449Z","steps":["trace[1030034939] 'range keys from in-memory index tree'  (duration: 152.153188ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:01:04.005141Z","caller":"traceutil/trace.go:171","msg":"trace[124891096] linearizableReadLoop","detail":"{readStateIndex:1665; appliedIndex:1664; }","duration":"123.342498ms","start":"2024-08-29T19:01:03.881774Z","end":"2024-08-29T19:01:04.005117Z","steps":["trace[124891096] 'read index received'  (duration: 123.147938ms)","trace[124891096] 'applied index is now lower than readState.Index'  (duration: 193.592µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T19:01:04.005635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.846229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-08-29T19:01:04.005681Z","caller":"traceutil/trace.go:171","msg":"trace[1600325626] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1603; }","duration":"123.906746ms","start":"2024-08-29T19:01:03.881761Z","end":"2024-08-29T19:01:04.005667Z","steps":["trace[1600325626] 'agreement among raft nodes before linearized reading'  (duration: 123.732263ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:01:04.177509Z","caller":"traceutil/trace.go:171","msg":"trace[182694419] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"164.123341ms","start":"2024-08-29T19:01:04.013364Z","end":"2024-08-29T19:01:04.177487Z","steps":["trace[182694419] 'process raft request'  (duration: 164.054249ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:01:04.177742Z","caller":"traceutil/trace.go:171","msg":"trace[1415170789] transaction","detail":"{read_only:false; response_revision:1604; number_of_response:1; }","duration":"170.441671ms","start":"2024-08-29T19:01:04.007287Z","end":"2024-08-29T19:01:04.177729Z","steps":["trace[1415170789] 'process raft request'  (duration: 125.945863ms)","trace[1415170789] 'compare'  (duration: 44.059264ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T19:07:19.387994Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1874}
	{"level":"info","ts":"2024-08-29T19:07:19.424070Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1874,"took":"34.963457ms","hash":2720505529,"current-db-size-bytes":9023488,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5021696,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-29T19:07:19.424140Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2720505529,"revision":1874,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-29T19:10:08.088517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.668402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031541726899317 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" mod_revision:967 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-08-29T19:10:08.089739Z","caller":"traceutil/trace.go:171","msg":"trace[356436880] linearizableReadLoop","detail":"{readStateIndex:2854; appliedIndex:2852; }","duration":"506.079116ms","start":"2024-08-29T19:10:07.583499Z","end":"2024-08-29T19:10:08.089578Z","steps":["trace[356436880] 'read index received'  (duration: 4.154946ms)","trace[356436880] 'applied index is now lower than readState.Index'  (duration: 501.922597ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T19:10:08.115369Z","caller":"traceutil/trace.go:171","msg":"trace[1508818076] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2674; }","duration":"531.112652ms","start":"2024-08-29T19:10:07.583556Z","end":"2024-08-29T19:10:08.114668Z","steps":["trace[1508818076] 'process raft request'  (duration: 505.673789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T19:10:08.115778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T19:10:07.583548Z","time spent":"532.047433ms","remote":"127.0.0.1:40058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":65,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" mod_revision:967 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > >"}
	{"level":"info","ts":"2024-08-29T19:10:08.116914Z","caller":"traceutil/trace.go:171","msg":"trace[499233089] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2674; }","duration":"533.49713ms","start":"2024-08-29T19:10:07.583396Z","end":"2024-08-29T19:10:08.116893Z","steps":["trace[499233089] 'process raft request'  (duration: 150.932878ms)","trace[499233089] 'compare'  (duration: 115.233391ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T19:10:08.117281Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T19:10:07.583361Z","time spent":"533.750281ms","remote":"127.0.0.1:40058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":65,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" mod_revision:967 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" > >"}
	{"level":"warn","ts":"2024-08-29T19:10:08.136608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"553.087229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/csi-hostpath-resizer-s4hmr\" ","response":"range_response_count:1 size:1076"}
	{"level":"info","ts":"2024-08-29T19:10:08.137171Z","caller":"traceutil/trace.go:171","msg":"trace[297885273] range","detail":"{range_begin:/registry/endpointslices/kube-system/csi-hostpath-resizer-s4hmr; range_end:; response_count:1; response_revision:2674; }","duration":"553.647222ms","start":"2024-08-29T19:10:07.583493Z","end":"2024-08-29T19:10:08.137141Z","steps":["trace[297885273] 'agreement among raft nodes before linearized reading'  (duration: 535.996287ms)","trace[297885273] 'range keys from in-memory index tree'  (duration: 17.009976ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T19:10:08.137288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T19:10:07.583462Z","time spent":"553.802714ms","remote":"127.0.0.1:40158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":1100,"request content":"key:\"/registry/endpointslices/kube-system/csi-hostpath-resizer-s4hmr\" "}
	{"level":"info","ts":"2024-08-29T19:10:08.137598Z","caller":"traceutil/trace.go:171","msg":"trace[792745607] transaction","detail":"{read_only:false; response_revision:2675; number_of_response:1; }","duration":"234.792768ms","start":"2024-08-29T19:10:07.902790Z","end":"2024-08-29T19:10:08.137583Z","steps":["trace[792745607] 'process raft request'  (duration: 211.785848ms)","trace[792745607] 'compare'  (duration: 21.836166ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T19:10:08.138635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.847588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-08-29T19:10:08.139803Z","caller":"traceutil/trace.go:171","msg":"trace[614484259] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2675; }","duration":"339.026624ms","start":"2024-08-29T19:10:07.800761Z","end":"2024-08-29T19:10:08.139787Z","steps":["trace[614484259] 'agreement among raft nodes before linearized reading'  (duration: 337.790441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T19:10:08.139138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"533.30623ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T19:10:08.145464Z","caller":"traceutil/trace.go:171","msg":"trace[482969201] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2675; }","duration":"539.627288ms","start":"2024-08-29T19:10:07.605812Z","end":"2024-08-29T19:10:08.145440Z","steps":["trace[482969201] 'agreement among raft nodes before linearized reading'  (duration: 533.285257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T19:10:08.146049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T19:10:07.800672Z","time spent":"345.253011ms","remote":"127.0.0.1:40058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1138,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	
	==> gcp-auth [cf944e83ec9f] <==
	2024/08/29 19:00:32 GCP Auth Webhook started!
	2024/08/29 19:00:51 Ready to marshal response ...
	2024/08/29 19:00:51 Ready to write response ...
	2024/08/29 19:00:53 Ready to marshal response ...
	2024/08/29 19:00:53 Ready to write response ...
	2024/08/29 19:01:21 Ready to marshal response ...
	2024/08/29 19:01:21 Ready to write response ...
	2024/08/29 19:01:21 Ready to marshal response ...
	2024/08/29 19:01:21 Ready to write response ...
	2024/08/29 19:01:21 Ready to marshal response ...
	2024/08/29 19:01:21 Ready to write response ...
	2024/08/29 19:09:32 Ready to marshal response ...
	2024/08/29 19:09:32 Ready to write response ...
	2024/08/29 19:09:37 Ready to marshal response ...
	2024/08/29 19:09:37 Ready to write response ...
	2024/08/29 19:09:56 Ready to marshal response ...
	2024/08/29 19:09:56 Ready to write response ...
	2024/08/29 19:10:21 Ready to marshal response ...
	2024/08/29 19:10:21 Ready to write response ...
	
	
	==> kernel <==
	 19:10:42 up  2:54,  0 users,  load average: 2.16, 1.41, 1.35
	Linux addons-444829 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [9a6cad5c1df5] <==
	W0829 19:01:14.291225       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0829 19:01:14.291251       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0829 19:01:14.311379       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 19:01:14.358601       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0829 19:01:14.630206       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 19:01:15.142037       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 19:09:40.515301       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 19:10:15.046171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:10:15.046686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:10:15.095783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:10:15.097032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:10:15.116606       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:10:15.117282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:10:15.140282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:10:15.140364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:10:15.292507       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:10:15.292627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 19:10:16.116539       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 19:10:16.292942       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 19:10:16.392271       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0829 19:10:24.985990       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.32:43116: read: connection reset by peer
	E0829 19:10:26.497930       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0829 19:10:35.016229       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0829 19:10:41.055563       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 19:10:42.162695       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [714be92672c1] <==
	W0829 19:10:24.473435       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:24.473815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:10:25.360623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:25.360895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:27.340863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="24.1µs"
	W0829 19:10:28.408767       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:28.408918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:32.496006       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 19:10:32.496071       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:10:32.518144       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 19:10:32.518267       1 shared_informer.go:320] Caches are synced for garbage collector
	W0829 19:10:32.803549       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:32.803624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:34.389393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="7.72µs"
	W0829 19:10:34.629451       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:34.629721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:34.646180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-444829"
	W0829 19:10:36.506938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:36.507054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:10:38.077695       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:38.079347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:10:39.280762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="33.941µs"
	W0829 19:10:39.767492       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:10:39.767559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0829 19:10:42.166114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d8f73c8a8542] <==
	I0829 18:57:38.236690       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:57:40.860425       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:57:40.887272       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:57:41.575639       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:57:41.575769       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:57:41.580434       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:57:41.581105       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:57:41.581214       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:57:41.601197       1 config.go:197] "Starting service config controller"
	I0829 18:57:41.601424       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:57:41.601616       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:57:41.601714       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:57:41.602926       1 config.go:326] "Starting node config controller"
	I0829 18:57:41.603042       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:57:41.904035       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:57:41.904228       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:57:41.904253       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c50767d3f334] <==
	W0829 18:57:22.026851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:57:22.027246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:22.844744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:57:22.845102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:22.872288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:57:22.872726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.013430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 18:57:23.014030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.018113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:57:23.019067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.044407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:57:23.044473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.150015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:57:23.150414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.230146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:57:23.230508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.260178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:57:23.260236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.323259       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:57:23.323706       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:57:23.345232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:57:23.345528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:57:23.358879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:57:23.359148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:57:25.709361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:10:41 addons-444829 kubelet[2182]: E0829 19:10:41.075541    2182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c" containerID="683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c"
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.075646    2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c"} err="failed to get container status \"683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 683532e1fa1534d345e7fc719e325350d59557ef96abb750a951b665da97b48c"
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.235207    2182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d7d0ad-2e5b-410e-b7d4-b63cbe093d11" path="/var/lib/kubelet/pods/a6d7d0ad-2e5b-410e-b7d4-b63cbe093d11/volumes"
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506444    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-modules\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506533    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7d4m\" (UniqueName: \"kubernetes.io/projected/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-kube-api-access-h7d4m\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506570    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-bpffs\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506623    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-host\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506653    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-cgroup\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506681    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-debugfs\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506714    2182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-run\") pod \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\" (UID: \"1d94ac99-7c2c-4f2f-9789-fcf2365cadb2\") "
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506867    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-run" (OuterVolumeSpecName: "run") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.506919    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-modules" (OuterVolumeSpecName: "modules") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.507540    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-host" (OuterVolumeSpecName: "host") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.507643    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-bpffs" (OuterVolumeSpecName: "bpffs") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.507677    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-cgroup" (OuterVolumeSpecName: "cgroup") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.507704    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-debugfs" (OuterVolumeSpecName: "debugfs") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.511316    2182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-kube-api-access-h7d4m" (OuterVolumeSpecName: "kube-api-access-h7d4m") pod "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2" (UID: "1d94ac99-7c2c-4f2f-9789-fcf2365cadb2"). InnerVolumeSpecName "kube-api-access-h7d4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607687    2182 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-debugfs\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607742    2182 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-run\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607771    2182 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-modules\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607799    2182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h7d4m\" (UniqueName: \"kubernetes.io/projected/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-kube-api-access-h7d4m\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607826    2182 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-bpffs\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607846    2182 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-host\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:41 addons-444829 kubelet[2182]: I0829 19:10:41.607865    2182 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/1d94ac99-7c2c-4f2f-9789-fcf2365cadb2-cgroup\") on node \"addons-444829\" DevicePath \"\""
	Aug 29 19:10:42 addons-444829 kubelet[2182]: I0829 19:10:42.061879    2182 scope.go:117] "RemoveContainer" containerID="0b3ea191d99d8b3ad81b65e824c2acaa834c2238d9993a67b31827ba64f1ddc5"
	
	
	==> storage-provisioner [8ff4272abf39] <==
	I0829 18:57:48.447385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:57:49.103724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:57:49.103940       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:57:49.911372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:57:49.912666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-444829_984f6338-e484-415e-8ef2-1ba5e52c3b51!
	I0829 18:57:49.920549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9e4b846-2ad4-4eab-9b7d-cc1d79b52003", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-444829_984f6338-e484-415e-8ef2-1ba5e52c3b51 became leader
	I0829 18:57:50.520853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-444829_984f6338-e484-415e-8ef2-1ba5e52c3b51!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444829 -n addons-444829
helpers_test.go:261: (dbg) Run:  kubectl --context addons-444829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-jqvkr ingress-nginx-admission-patch-vgftr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-444829 describe pod busybox ingress-nginx-admission-create-jqvkr ingress-nginx-admission-patch-vgftr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-444829 describe pod busybox ingress-nginx-admission-create-jqvkr ingress-nginx-admission-patch-vgftr: exit status 1 (119.785073ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-444829/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 19:01:21 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5pwrm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5pwrm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m22s                   default-scheduler  Successfully assigned default/busybox to addons-444829
	  Warning  Failed     7m59s (x6 over 9m20s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m48s (x4 over 9m21s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m21s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m21s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m17s (x21 over 9m20s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jqvkr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vgftr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-444829 describe pod busybox ingress-nginx-admission-create-jqvkr ingress-nginx-admission-patch-vgftr: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0829 19:15:33.491484  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.500724  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.512341  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.534017  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.575557  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.657443  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:33.819139  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0829 19:15:36.065055  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:38.627410  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
2024/08/29 19:16:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0829 19:16:55.435004  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Non-zero exit: kubectl --context functional-984086 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: context deadline exceeded (2.017µs)
functional_test_tunnel_test.go:245: nginx-svc svc.status.loadBalancer.ingress never got an IP: context deadline exceeded
functional_test_tunnel_test.go:246: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc
functional_test_tunnel_test.go:250: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.104.188.229   <pending>     80:32492/TCP   3m10s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.11s)
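The jsonpath loop above is simply polling Service.status.loadBalancer.ingress; on the docker driver that list stays empty (EXTERNAL-IP remains <pending>) unless `minikube tunnel` can program a route to the service, which is what the surrounding TunnelCmd tests rely on. A minimal client-go sketch of the same check, assuming the default kubeconfig, the default namespace and a service named nginx-svc (illustrative only, not the minikube test code):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, the same credentials the kubectl calls above use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll for up to 3 minutes, mirroring the repeated jsonpath queries in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()
	for {
		svc, err := client.CoreV1().Services("default").Get(ctx, "nginx-svc", metav1.GetOptions{})
		if err == nil && len(svc.Status.LoadBalancer.Ingress) > 0 {
			fmt.Println("ingress IP:", svc.Status.LoadBalancer.Ingress[0].IP)
			return
		}
		select {
		case <-ctx.Done():
			// No ingress entry ever appeared; kubectl keeps showing EXTERNAL-IP <pending>.
			fmt.Println("timed out waiting for a loadBalancer ingress IP")
			return
		case <-time.After(5 * time.Second):
		}
	}
}

On the docker driver it is normally a separately running `minikube tunnel` process that makes the ingress entry appear.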

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (13.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdany-port1565433330/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724958951832828348" to /tmp/TestFunctionalparallelMountCmdany-port1565433330/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724958951832828348" to /tmp/TestFunctionalparallelMountCmdany-port1565433330/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724958951832828348" to /tmp/TestFunctionalparallelMountCmdany-port1565433330/001/test-1724958951832828348
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (613.047409ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (405.390822ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0829 19:15:53.991220  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (411.472002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.59713ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.376196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (398.506841ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (499.599695ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:125: /mount-9p did not appear within 12.365262229s: exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (427.131899ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-984086 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "sudo umount -f /mount-9p": exit status 1 (396.065838ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-linux-amd64 -p functional-984086 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdany-port1565433330/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdany-port1565433330/001:/mount-9p --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdany-port1565433330/001:/mount-9p --alsologtostderr -v=1] stderr:
I0829 19:15:51.975665  170475 out.go:345] Setting OutFile to fd 1 ...
I0829 19:15:51.976008  170475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:15:51.976046  170475 out.go:358] Setting ErrFile to fd 2...
I0829 19:15:51.976070  170475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:15:51.976522  170475 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:15:51.977281  170475 mustload.go:65] Loading cluster: functional-984086
I0829 19:15:51.978069  170475 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:15:51.979118  170475 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:15:52.017869  170475 host.go:66] Checking if "functional-984086" exists ...
I0829 19:15:52.018632  170475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0829 19:15:52.236262  170475 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:55 SystemTime:2024-08-29 19:15:52.216867255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0829 19:15:52.236534  170475 cli_runner.go:164] Run: docker network inspect functional-984086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0829 19:15:52.277065  170475 out.go:201] 
W0829 19:15:52.279109  170475 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0829 19:15:52.281117  170475 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (13.31s)
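This failure, and the two MountCmd failures reported below (specific-port and VerifyCleanup), share the root cause shown in the mount stderr: minikube exits with HOST_UNSUPPORTED because the Cloud Shell host kernel does not expose the 9p filesystem, so nothing is ever mounted and every subsequent findmnt/umount probe necessarily fails. One way to see what the kernel advertises is /proc/filesystems, which lists the filesystems currently registered (a 9p client that is only available as a not-yet-loaded module would not appear there). A small illustrative Go check, not part of minikube:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/filesystems lists registered filesystems one per line,
	// e.g. a "nodev  9p" entry when the 9p client is available.
	f, err := os.Open("/proc/filesystems")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[len(fields)-1] == "9p" {
			found = true
			break
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if found {
		fmt.Println("kernel reports 9p support")
	} else {
		fmt.Println("no 9p in /proc/filesystems (it may still exist as a loadable module)")
	}
}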

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (14.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdspecific-port2692212433/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (740.152071ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.964745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.156414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.284086ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.050121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
E0829 19:16:14.472764  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (411.394376ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.868933ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 14.065133971s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (436.403406ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-984086 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "sudo umount -f /mount-9p": exit status 1 (382.030635ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-984086 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdspecific-port2692212433/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdspecific-port2692212433/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdspecific-port2692212433/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0829 19:16:05.272346  171147 out.go:345] Setting OutFile to fd 1 ...
I0829 19:16:05.272744  171147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:05.272765  171147 out.go:358] Setting ErrFile to fd 2...
I0829 19:16:05.272776  171147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:05.273300  171147 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:16:05.273834  171147 mustload.go:65] Loading cluster: functional-984086
I0829 19:16:05.274588  171147 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:16:05.275705  171147 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:16:05.335046  171147 host.go:66] Checking if "functional-984086" exists ...
I0829 19:16:05.335712  171147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0829 19:16:05.639581  171147 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:05.598623536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0829 19:16:05.640943  171147 cli_runner.go:164] Run: docker network inspect functional-984086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0829 19:16:05.699654  171147 out.go:201] 
W0829 19:16:05.703885  171147 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0829 19:16:05.705651  171147 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (14.99s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (11.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (1.308247092s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (451.459623ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (480.197139ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (386.356746ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (383.947054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "findmnt -T" /mount1: exit status 1 (474.502901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:342: mount was not ready in time: exit status 1
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount1 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount1 --alsologtostderr -v=1] stderr:
I0829 19:16:20.412399  171828 out.go:345] Setting OutFile to fd 1 ...
I0829 19:16:20.412853  171828 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.412897  171828 out.go:358] Setting ErrFile to fd 2...
I0829 19:16:20.412921  171828 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.413475  171828 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:16:20.414034  171828 mustload.go:65] Loading cluster: functional-984086
I0829 19:16:20.414801  171828 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:16:20.432146  171828 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:16:20.548229  171828 host.go:66] Checking if "functional-984086" exists ...
I0829 19:16:20.549026  171828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0829 19:16:21.120294  171828 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:20.891673598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0829 19:16:21.120600  171828 cli_runner.go:164] Run: docker network inspect functional-984086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0829 19:16:21.224289  171828 out.go:201] 
W0829 19:16:21.226741  171828 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0829 19:16:21.230013  171828 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount2 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount2 --alsologtostderr -v=1] stderr:
I0829 19:16:20.380928  171829 out.go:345] Setting OutFile to fd 1 ...
I0829 19:16:20.381944  171829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.382020  171829 out.go:358] Setting ErrFile to fd 2...
I0829 19:16:20.382064  171829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.382647  171829 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:16:20.383312  171829 mustload.go:65] Loading cluster: functional-984086
I0829 19:16:20.384096  171829 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:16:20.385094  171829 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:16:20.606190  171829 host.go:66] Checking if "functional-984086" exists ...
I0829 19:16:20.606657  171829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0829 19:16:21.128908  171829 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:20.891673598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0829 19:16:21.129348  171829 cli_runner.go:164] Run: docker network inspect functional-984086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0829 19:16:21.220163  171829 out.go:201] 
W0829 19:16:21.223164  171829 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0829 19:16:21.225546  171829 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount3 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-984086 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4073446630/001:/mount3 --alsologtostderr -v=1] stderr:
I0829 19:16:20.488823  171830 out.go:345] Setting OutFile to fd 1 ...
I0829 19:16:20.489153  171830 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.489173  171830 out.go:358] Setting ErrFile to fd 2...
I0829 19:16:20.489183  171830 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:16:20.489668  171830 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:16:20.490286  171830 mustload.go:65] Loading cluster: functional-984086
I0829 19:16:20.491012  171830 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:16:20.492140  171830 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:16:20.690309  171830 host.go:66] Checking if "functional-984086" exists ...
I0829 19:16:20.691030  171830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0829 19:16:21.116220  171830 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:20.891673598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0829 19:16:21.116525  171830 cli_runner.go:164] Run: docker network inspect functional-984086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0829 19:16:21.193624  171830 out.go:201] 
W0829 19:16:21.195666  171830 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0829 19:16:21.197841  171830 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (11.73s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0829 19:18:17.358764  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-984086 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.104.188.229   <pending>     80:32492/TCP   5m3s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.68s)
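The "no Host in request URL" error is a downstream symptom of the IngressIP failure above: the loadBalancer ingress IP was never populated, so the test ended up issuing a GET against "http://" with an empty host, which net/http rejects before attempting any connection. A tiny sketch that reproduces the same error shape (illustrative, not the test code):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	ip := "" // what the test effectively got back: no loadBalancer ingress IP
	target := "http://" + ip

	// With an empty host the request never leaves the machine; the client
	// fails with: Get "http:": http: no Host in request URL
	resp, err := http.Get(target)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp.Body.Close()
}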

                                                
                                    

Test pass (97/108)

Order   Passed test   Duration (s)
3 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
4 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
5 TestAddons/Setup 265.61
7 TestAddons/serial/Volcano 48.09
9 TestAddons/serial/GCPAuth/Namespaces 0.26
12 TestAddons/parallel/Ingress 23.2
13 TestAddons/parallel/InspektorGadget 12.51
14 TestAddons/parallel/MetricsServer 7
15 TestAddons/parallel/HelmTiller 11.84
17 TestAddons/parallel/CSI 48.11
18 TestAddons/parallel/Headlamp 21.26
19 TestAddons/parallel/CloudSpanner 7.87
20 TestAddons/parallel/LocalPath 14.22
21 TestAddons/parallel/NvidiaDevicePlugin 6.81
22 TestAddons/parallel/Yakd 12.28
23 TestAddons/StoppedEnableDisable 11.71
26 TestFunctional/serial/CopySyncFile 0.05
27 TestFunctional/serial/StartWithProxy 80.57
28 TestFunctional/serial/AuditLog 0
29 TestFunctional/serial/SoftStart 37.34
30 TestFunctional/serial/KubeContext 0.11
31 TestFunctional/serial/KubectlGetPods 0.14
34 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
35 TestFunctional/serial/CacheCmd/cache/add_local 1.54
36 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
37 TestFunctional/serial/CacheCmd/cache/list 0.09
38 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
39 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
40 TestFunctional/serial/CacheCmd/cache/delete 0.19
41 TestFunctional/serial/MinikubeKubectlCmd 1.17
42 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.18
43 TestFunctional/serial/ExtraConfig 43.23
44 TestFunctional/serial/ComponentHealth 0.17
45 TestFunctional/serial/LogsCmd 1.87
46 TestFunctional/serial/LogsFileCmd 1.8
47 TestFunctional/serial/InvalidService 5.52
49 TestFunctional/parallel/ConfigCmd 1.01
50 TestFunctional/parallel/DashboardCmd 15.14
51 TestFunctional/parallel/DryRun 0.75
52 TestFunctional/parallel/InternationalLanguage 0.47
53 TestFunctional/parallel/StatusCmd 1.66
57 TestFunctional/parallel/ServiceCmdConnect 12.9
58 TestFunctional/parallel/AddonsCmd 0.24
59 TestFunctional/parallel/PersistentVolumeClaim 31.67
61 TestFunctional/parallel/SSHCmd 1.12
62 TestFunctional/parallel/CpCmd 4.33
63 TestFunctional/parallel/MySQL 44.13
64 TestFunctional/parallel/FileSync 0.42
65 TestFunctional/parallel/CertSync 2.48
69 TestFunctional/parallel/NodeLabels 0.13
71 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
73 TestFunctional/parallel/License 0.42
75 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.15
76 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
78 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.68
80 TestFunctional/parallel/ServiceCmd/DeployApp 6.35
81 TestFunctional/parallel/ServiceCmd/List 0.81
82 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
83 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
84 TestFunctional/parallel/ServiceCmd/Format 0.55
85 TestFunctional/parallel/ServiceCmd/URL 0.56
86 TestFunctional/parallel/ProfileCmd/profile_not_create 0.67
87 TestFunctional/parallel/ProfileCmd/profile_list 0.59
88 TestFunctional/parallel/ProfileCmd/profile_json_output 0.6
92 TestFunctional/parallel/DockerEnv/bash 1.67
93 TestFunctional/parallel/Version/short 0.11
94 TestFunctional/parallel/Version/components 1.68
95 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
96 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
97 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
98 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
99 TestFunctional/parallel/ImageCommands/ImageBuild 3.3
100 TestFunctional/parallel/ImageCommands/Setup 2.18
101 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
102 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.25
103 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.18
104 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
105 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
106 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
107 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
116 TestFunctional/delete_echo-server_images 0.08
117 TestFunctional/delete_my-image_image 0.03
118 TestFunctional/delete_minikube_cached_images 0.05
123 TestStartStop/group/cloud-shell/serial/FirstStart 84.11
124 TestStartStop/group/cloud-shell/serial/DeployApp 9.54
125 TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive 1.64
126 TestStartStop/group/cloud-shell/serial/Stop 11.23
127 TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop 0.36
128 TestStartStop/group/cloud-shell/serial/SecondStart 273.69
129 TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop 6.01
130 TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop 6.14
131 TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages 0.36
132 TestStartStop/group/cloud-shell/serial/Pause 4.68
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444829
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-444829: exit status 85 (156.684131ms)

                                                
                                                
-- stdout --
	* Profile "addons-444829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444829"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444829
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-444829: exit status 85 (155.457898ms)

                                                
                                                
-- stdout --
	* Profile "addons-444829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444829"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
x
+
TestAddons/Setup (265.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-444829 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-444829 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m25.613221814s)
--- PASS: TestAddons/Setup (265.61s)

                                                
                                    
x
+
TestAddons/serial/Volcano (48.09s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 239.145816ms
addons_test.go:913: volcano-controller stabilized in 239.294097ms
addons_test.go:905: volcano-admission stabilized in 239.506202ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-wjgjv" [9bf6e70a-a667-498c-b044-52d860731013] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.017017964s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gdnwk" [139dede4-d637-4d84-8c35-c78d860e5fd9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.008845107s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-96qzg" [e5843d4e-0e03-4a5f-9eb3-b8afd67028f3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.007621155s
addons_test.go:932: (dbg) Run:  kubectl --context addons-444829 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-444829 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-444829 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ad98864f-1560-4479-9063-a2aaba1512b5] Pending
helpers_test.go:344: "test-job-nginx-0" [ad98864f-1560-4479-9063-a2aaba1512b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ad98864f-1560-4479-9063-a2aaba1512b5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.012925367s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable volcano --alsologtostderr -v=1: (10.961361195s)
--- PASS: TestAddons/serial/Volcano (48.09s)
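Note: the contents of testdata/vcjob.yaml are not reproduced in the log. As a rough sketch only (field names follow Volcano's batch.volcano.sh/v1alpha1 Job API; the job name, task name, and image are assumptions inferred from the pod "test-job-nginx-0" above), the resource the test waits on could look like this:

kubectl --context addons-444829 create namespace my-volcano
kubectl --context addons-444829 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF
kubectl --context addons-444829 get vcjob -n my-volcano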

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-444829 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-444829 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                    
TestAddons/parallel/Ingress (23.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-444829 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-444829 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-444829 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7a15bde4-ee70-4213-aa4e-dfc8263ff0b9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7a15bde4-ee70-4213-aa4e-dfc8263ff0b9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013583659s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-444829 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable ingress-dns --alsologtostderr -v=1: (1.473812494s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable ingress --alsologtostderr -v=1: (8.229484374s)
--- PASS: TestAddons/parallel/Ingress (23.20s)
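Note: the manifests behind testdata/nginx-ingress-v1.yaml and testdata/nginx-pod-svc.yaml are not shown here. A minimal sketch of the host-routed setup that the curl above exercises, assuming a backing Service named nginx on port 80 (both names are assumptions):

kubectl --context addons-444829 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF
# From the host, hit the node IP with the Host header the rule expects:
curl -s "http://$(out/minikube-linux-amd64 -p addons-444829 ip)/" -H 'Host: nginx.example.com'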

                                                
                                    
TestAddons/parallel/InspektorGadget (12.51s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b66tg" [1d94ac99-7c2c-4f2f-9789-fcf2365cadb2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.042023023s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444829
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444829: (7.462409512s)
--- PASS: TestAddons/parallel/InspektorGadget (12.51s)

                                                
                                    
TestAddons/parallel/MetricsServer (7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 8.754044ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-q8fls" [c596bb1f-a383-47e0-a471-19152ad102bc] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008209931s
addons_test.go:417: (dbg) Run:  kubectl --context addons-444829 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 27.064711ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-lm452" [b5208811-56ec-4d66-b144-d6d7814e857e] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007609771s
addons_test.go:475: (dbg) Run:  kubectl --context addons-444829 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-444829 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.697151678s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable helm-tiller --alsologtostderr -v=1: (1.106286983s)
--- PASS: TestAddons/parallel/HelmTiller (11.84s)

                                                
                                    
TestAddons/parallel/CSI (48.11s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 34.382587ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-444829 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-444829 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [55f85d2b-b09b-4b98-a61b-edfd289493df] Pending
helpers_test.go:344: "task-pv-pod" [55f85d2b-b09b-4b98-a61b-edfd289493df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [55f85d2b-b09b-4b98-a61b-edfd289493df] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.008413768s
addons_test.go:590: (dbg) Run:  kubectl --context addons-444829 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444829 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444829 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-444829 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-444829 delete pod task-pv-pod: (1.480172281s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-444829 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-444829 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-444829 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0f1d6828-fee4-4876-ae58-01d2ecf9e4c6] Pending
helpers_test.go:344: "task-pv-pod-restore" [0f1d6828-fee4-4876-ae58-01d2ecf9e4c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0f1d6828-fee4-4876-ae58-01d2ecf9e4c6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007614857s
addons_test.go:632: (dbg) Run:  kubectl --context addons-444829 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-444829 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-444829 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.834236552s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable volumesnapshots --alsologtostderr -v=1: (1.36239268s)
--- PASS: TestAddons/parallel/CSI (48.11s)
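Note: this test walks the CSI snapshot-and-restore path: claim, pod, VolumeSnapshot, then a new claim restored from that snapshot. The testdata manifests are not included in the log; a condensed sketch of the snapshot and restore objects, assuming the addon's usual csi-hostpath-sc storage class and csi-hostpath-snapclass snapshot class (both class names are assumptions):

kubectl --context addons-444829 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
# The snapshot can back a restore once readyToUse reports true:
kubectl --context addons-444829 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'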

                                                
                                    
TestAddons/parallel/Headlamp (21.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-444829 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-444829 --alsologtostderr -v=1: (1.29814417s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-78f6g" [b05b1be0-186e-413e-bde3-51fe15892ca5] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-78f6g" [b05b1be0-186e-413e-bde3-51fe15892ca5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-78f6g" [b05b1be0-186e-413e-bde3-51fe15892ca5] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005627369s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable headlamp --alsologtostderr -v=1: (5.95306349s)
--- PASS: TestAddons/parallel/Headlamp (21.26s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8n98q" [38cdfd1e-73e5-44f6-b9f7-e979cbd7dfb6] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.008088413s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-444829
addons_test.go:870: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-444829: (1.830666422s)
--- PASS: TestAddons/parallel/CloudSpanner (7.87s)

                                                
                                    
TestAddons/parallel/LocalPath (14.22s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-444829 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-444829 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1b914a72-f2b2-42db-a3f2-7b61262b6515] Pending
helpers_test.go:344: "test-local-path" [1b914a72-f2b2-42db-a3f2-7b61262b6515] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1b914a72-f2b2-42db-a3f2-7b61262b6515] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1b914a72-f2b2-42db-a3f2-7b61262b6515] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006520982s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-444829 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 ssh "cat /opt/local-path-provisioner/pvc-df65b4a8-de5c-49dc-9cef-311d7c30c4f7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-444829 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-444829 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.22s)
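Note: the repeated Pending polls above are expected, since the local-path provisioner's storage class typically uses volumeBindingMode: WaitForFirstConsumer, so the claim only binds once a consuming pod is scheduled. A minimal sketch of the claim-plus-writer pair the test appears to use (the local-path class name is the provisioner's usual default; the container command and file name are assumptions):

kubectl --context addons-444829 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo local-path-test > /data/file1"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF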

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zvvqm" [6500c674-6c07-4c1a-9b34-9077f15f24df] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004527331s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-444829
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                    
TestAddons/parallel/Yakd (12.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sz9hc" [5a055eff-d09d-4fb9-84d6-de26ead06ba0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005213381s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-444829 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-444829 addons disable yakd --alsologtostderr -v=1: (6.268655704s)
--- PASS: TestAddons/parallel/Yakd (12.28s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.71s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-444829
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-444829: (11.240585426s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444829
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444829
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-444829
--- PASS: TestAddons/StoppedEnableDisable (11.71s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/files/etc/test/nested/copy/134686/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.05s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-984086 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m20.556098939s)
--- PASS: TestFunctional/serial/StartWithProxy (80.57s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.34s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-984086 --alsologtostderr -v=8: (37.252556148s)
functional_test.go:663: soft start took 37.344354362s for "functional-984086" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.34s)

                                                
                                    
TestFunctional/serial/KubeContext (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-984086 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:3.1: (1.192570527s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:3.3: (1.20998249s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-984086 /tmp/TestFunctionalserialCacheCmdcacheadd_local2918743697/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache add minikube-local-cache-test:functional-984086
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache delete minikube-local-cache-test:functional-984086
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-984086
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (459.878479ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)
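Note: for reference, the same flow outside the test harness: cache an image, delete it from the node, and let cache reload push it back (commands mirror the ones logged above; the profile name is the one used in this run):

out/minikube-linux-amd64 -p functional-984086 cache add registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-984086 ssh sudo docker rmi registry.k8s.io/pause:latest   # image is now gone from the node
out/minikube-linux-amd64 -p functional-984086 cache reload                                       # re-loads everything still in the cache
out/minikube-linux-amd64 -p functional-984086 ssh sudo crictl inspecti registry.k8s.io/pause:latest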

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 kubectl -- --context functional-984086 get pods
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 kubectl -- --context functional-984086 get pods: (1.167709596s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-984086 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.23s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-984086 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.228756929s)
functional_test.go:761: restart took 43.228909215s for "functional-984086" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.23s)
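Note: --extra-config takes component.key=value pairs and may be repeated; the run above turns on the NamespaceAutoProvision admission plugin for the apiserver. A sketch of the pattern with a second, illustrative kubelet override added alongside it (the kubelet flag is an assumption taken from this profile's option dump, not something this test sets):

out/minikube-linux-amd64 start -p functional-984086 --wait=all \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --extra-config=kubelet.cgroups-per-qos=false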

                                                
                                    
TestFunctional/serial/ComponentHealth (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-984086 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.17s)
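Note: the helper above reads each control-plane pod's phase and Ready condition. A kubectl-only equivalent of that check (the jsonpath layout is an assumption for illustration, not what the test itself runs):

kubectl --context functional-984086 -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.labels.component}{"  "}{.status.phase}{"  "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'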

                                                
                                    
TestFunctional/serial/LogsCmd (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 logs: (1.865649292s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 logs --file /tmp/TestFunctionalserialLogsFileCmd4268687969/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 logs --file /tmp/TestFunctionalserialLogsFileCmd4268687969/001/logs.txt: (1.793901961s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

                                                
                                    
TestFunctional/serial/InvalidService (5.52s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-984086 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-984086
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-984086: exit status 115 (721.987918ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31212 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_5b55102efd84289233ffc613c137836b410b4e4d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-984086 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-984086 delete -f testdata/invalidsvc.yaml: (1.468633536s)
--- PASS: TestFunctional/serial/InvalidService (5.52s)
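Note: testdata/invalidsvc.yaml is not reproduced here; the SVC_UNREACHABLE message indicates a NodePort service whose selector matches no running pod. One way to reproduce that condition (the selector label is an assumption):

kubectl --context functional-984086 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod      # nothing carries this label, so the service has no endpoints
  ports:
    - port: 80
      targetPort: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-984086   # fails: no running pod for the service
kubectl --context functional-984086 delete service invalid-svc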

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 config get cpus: exit status 14 (166.520017ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 config get cpus: exit status 14 (187.445837ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.01s)
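Note: exit status 14 above is what minikube returns when the requested key is absent from its config, so scripts need to treat "config get" on an unset key as a soft failure. The same round trip, compressed:

out/minikube-linux-amd64 -p functional-984086 config set cpus 2
out/minikube-linux-amd64 -p functional-984086 config get cpus            # prints 2
out/minikube-linux-amd64 -p functional-984086 config unset cpus
out/minikube-linux-amd64 -p functional-984086 config get cpus || echo "cpus is not set"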

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-984086 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-984086 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 172895: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.14s)

                                                
                                    
TestFunctional/parallel/DryRun (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (301.343509ms)

                                                
                                                
-- stdout --
	* [functional-984086] minikube v1.33.1 on Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:16:34.059241  172644 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:16:34.059603  172644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:34.059616  172644 out.go:358] Setting ErrFile to fd 2...
	I0829 19:16:34.059626  172644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:34.059938  172644 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
	I0829 19:16:34.060578  172644 out.go:352] Setting JSON to false
	I0829 19:16:34.061823  172644 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":10832,"bootTime":1724948162,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0829 19:16:34.061898  172644 start.go:139] virtualization:  guest
	I0829 19:16:34.066331  172644 out.go:177] * [functional-984086] minikube v1.33.1 on Ubuntu 22.04 (amd64)
	I0829 19:16:34.071122  172644 notify.go:220] Checking for updates...
	I0829 19:16:34.075322  172644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:16:34.079838  172644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:16:34.083008  172644 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	I0829 19:16:34.086399  172644 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube
	I0829 19:16:34.089850  172644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:16:34.093580  172644 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0829 19:16:34.098932  172644 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:16:34.100243  172644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:16:34.144037  172644 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0829 19:16:34.144267  172644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:16:34.258053  172644 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:34.238841152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 19:16:34.258249  172644 docker.go:307] overlay module found
	I0829 19:16:34.262631  172644 out.go:177] * Using the docker driver based on existing profile
	I0829 19:16:34.265609  172644 start.go:297] selected driver: docker
	I0829 19:16:34.265666  172644 start.go:901] validating driver "docker" against &{Name:functional-984086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-984086 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:16:34.265872  172644 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:16:34.269722  172644 out.go:201] 
	W0829 19:16:34.272584  172644 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 19:16:34.276339  172644 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.75s)
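Note: --dry-run validates the requested flags against the existing profile without changing the cluster, which is why the undersized --memory 250MB request above fails fast with RSRC_INSUFFICIENT_REQ_MEMORY. That makes it usable as a pre-flight check, for example:

if out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 4000 \
     --driver=docker --container-runtime=docker > /dev/null; then
  echo "start flags accepted"
else
  echo "start flags rejected"
fi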

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (471.606802ms)

                                                
                                                
-- stdout --
	* [functional-984086] minikube v1.33.1 sur Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:16:33.728381  172599 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:16:33.728706  172599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:33.728724  172599 out.go:358] Setting ErrFile to fd 2...
	I0829 19:16:33.728737  172599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:16:33.729502  172599 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
	I0829 19:16:33.730532  172599 out.go:352] Setting JSON to false
	I0829 19:16:33.732258  172599 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":10832,"bootTime":1724948162,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0829 19:16:33.732405  172599 start.go:139] virtualization:  guest
	I0829 19:16:33.737555  172599 out.go:177] * [functional-984086] minikube v1.33.1 sur Ubuntu 22.04 (amd64)
	I0829 19:16:33.743195  172599 notify.go:220] Checking for updates...
	I0829 19:16:33.745143  172599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:16:33.749879  172599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:16:33.754502  172599 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19530-128633/kubeconfig
	I0829 19:16:33.760868  172599 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19530-128633/.minikube
	I0829 19:16:33.765043  172599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:16:33.769681  172599 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0829 19:16:33.773974  172599 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:16:33.775064  172599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:16:33.855406  172599 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0829 19:16:33.855614  172599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:16:33.955129  172599 info.go:266] docker info: {ID:ed424db3-1cee-48f2-94d7-cc1f826da0cb Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-08-29 19:16:33.935900947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0829 19:16:33.955331  172599 docker.go:307] overlay module found
	I0829 19:16:33.959241  172599 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0829 19:16:33.962202  172599 start.go:297] selected driver: docker
	I0829 19:16:33.962238  172599 start.go:901] validating driver "docker" against &{Name:functional-984086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-984086 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:16:33.962427  172599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:16:33.966606  172599 out.go:201] 
	W0829 19:16:33.970199  172599 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 19:16:33.973610  172599 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.47s)
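Note: the pass condition here is the localized failure itself: the dry run exits with status 23 because 250MB is below minikube's 1800MB floor, and the RSRC_INSUFFICIENT_REQ_MEMORY message is rendered in French. A minimal reproduction sketch outside the harness, assuming minikube picks the output language from LC_ALL (profile name and flags taken from this run):

    # assumes LC_ALL drives minikube's translations; 250MB is below the 1800MB minimum
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-984086 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=docker
    echo $?    # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY), matching the run above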

                                                
                                    
TestFunctional/parallel/StatusCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-984086 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-984086 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hcncs" [02eca2c1-8da2-4f1f-9f85-d8c7d28a4dff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hcncs" [02eca2c1-8da2-4f1f-9f85-d8c7d28a4dff] Running
E0829 19:15:34.140917  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:15:34.782999  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005462472s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30640
functional_test.go:1675: http://192.168.49.2:30640: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-hcncs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30640
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.90s)
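The NodePort URL printed by "service --url" can also be exercised by hand; a short sketch against the same service (the 30640 port above is specific to this run, so the URL is re-fetched rather than hard-coded):

    URL=$(out/minikube-linux-amd64 -p functional-984086 service hello-node-connect --url)
    curl -s "$URL"    # expect the echoserver block shown above (Hostname, Server values, Request Information)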

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (31.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e409fe55-3018-4c7c-b3fe-bca37be9ac16] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00727936s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-984086 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-984086 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-984086 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-984086 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73d224a3-94c9-4f86-b4f2-549a0e608cc5] Pending
helpers_test.go:344: "sp-pod" [73d224a3-94c9-4f86-b4f2-549a0e608cc5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [73d224a3-94c9-4f86-b4f2-549a0e608cc5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005964761s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-984086 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-984086 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-984086 delete -f testdata/storage-provisioner/pod.yaml: (1.195074388s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-984086 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9decf343-b639-4def-8d66-1ba1b04533db] Pending
helpers_test.go:344: "sp-pod" [9decf343-b639-4def-8d66-1ba1b04533db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9decf343-b639-4def-8d66-1ba1b04533db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006994279s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-984086 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.67s)
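What makes this test pass is persistence across pod recreation: /tmp/mount/foo is written into the claim-backed volume, the pod is deleted and re-created from the same manifest, and the file must still be listed afterwards. A hand-run equivalent, sketched with kubectl wait instead of the harness's own readiness polling (names are the ones used above):

    kubectl --context functional-984086 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-984086 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-984086 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-984086 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-984086 exec sp-pod -- ls /tmp/mount    # foo should still be present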

                                                
                                    
TestFunctional/parallel/SSHCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (4.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh -n functional-984086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cp functional-984086:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1877230778/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh -n functional-984086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh -n functional-984086 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (4.33s)

                                                
                                    
TestFunctional/parallel/MySQL (44.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-984086 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-dqc2p" [cd44da0f-312b-4aa7-956a-83c352593c71] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-dqc2p" [cd44da0f-312b-4aa7-956a-83c352593c71] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.00764125s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;": exit status 1 (314.413566ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;": exit status 1 (255.280463ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;": exit status 1 (367.639813ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;": exit status 1 (227.193173ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (44.13s)
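The intermediate "Can't connect" and "Access denied" errors are normal while mysqld is still initializing inside the pod; the test simply re-runs the query until it succeeds, as the repeated functional_test.go:1807 lines show. A comparable retry loop, sketched with the pod name from this run:

    # keep retrying until the server accepts the root password and answers the query
    until kubectl --context functional-984086 exec mysql-6cdb49bbb-dqc2p -- \
        mysql -ppassword -e "show databases;"; do
      sleep 5
    done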

                                                
                                    
TestFunctional/parallel/FileSync (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/134686/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /etc/test/nested/copy/134686/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

                                                
                                    
TestFunctional/parallel/CertSync (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/134686.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /etc/ssl/certs/134686.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/134686.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /usr/share/ca-certificates/134686.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1346862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /etc/ssl/certs/1346862.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1346862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /usr/share/ca-certificates/1346862.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.48s)
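The .0 paths checked above are OpenSSL subject-hash link names; assuming the standard c_rehash convention (which is what the /etc/ssl/certs/51391683.0 and 3ec20f2e.0 paths imply), the mapping can be verified inside the node:

    # hedged: expected to print 51391683 if that is the subject hash of the synced cert
    out/minikube-linux-amd64 -p functional-984086 ssh \
        "sudo openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/134686.pem"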

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-984086 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
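The go-template above just prints every label key on the first node; an equivalent quick check outside the harness:

    kubectl --context functional-984086 get nodes --show-labels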

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh "sudo systemctl is-active crio": exit status 1 (416.461904ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
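The non-zero exit is the passing case here: systemctl is-active exits with status 3 for an inactive unit, so crio reporting "inactive" while docker is the selected runtime is exactly what the test expects. The same check by hand (the docker result is the expected counterpart, not taken from this log):

    out/minikube-linux-amd64 -p functional-984086 ssh "sudo systemctl is-active crio"      # prints inactive, exits non-zero
    out/minikube-linux-amd64 -p functional-984086 ssh "sudo systemctl is-active docker"    # expected to print active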

                                                
                                    
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 168280: os: process already finished
helpers_test.go:502: unable to terminate pid 168101: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-984086 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b17d1748-d46e-43e6-9c8d-323a0f012b7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b17d1748-d46e-43e6-9c8d-323a0f012b7e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.014543449s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-984086 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-984086 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-l4hmp" [3c8c3ab5-9dd6-4bac-a998-578c0dbe9270] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-l4hmp" [3c8c3ab5-9dd6-4bac-a998-578c0dbe9270] Running
E0829 19:15:43.749266  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005520383s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service list -o json
functional_test.go:1494: Took "669.578644ms" to run "out/minikube-linux-amd64 -p functional-984086 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30392
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30392
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "489.955022ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "103.948021ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "495.089601ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "100.13468ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.60s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-984086 docker-env) && out/minikube-linux-amd64 status -p functional-984086"
functional_test.go:499: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-984086 docker-env) && out/minikube-linux-amd64 status -p functional-984086": (1.118917378s)
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-984086 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.67s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 version -o=json --components: (1.679956313s)
--- PASS: TestFunctional/parallel/Version/components (1.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984086 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-984086
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-984086
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984086 image ls --format short --alsologtostderr:
I0829 19:17:51.563067  175531 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:51.563347  175531 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.563360  175531 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:51.563369  175531 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.564139  175531 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:17:51.566098  175531 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.566442  175531 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.567911  175531 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:17:51.601524  175531 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:51.601642  175531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984086
I0829 19:17:51.632504  175531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/functional-984086/id_rsa Username:docker}
I0829 19:17:51.730688  175531 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984086 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| docker.io/kicbase/echo-server               | functional-984086 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-984086 | 160c70786fb91 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984086 image ls --format table --alsologtostderr:
I0829 19:17:52.155655  175597 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:52.155976  175597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:52.156028  175597 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:52.156049  175597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:52.156398  175597 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:17:52.157349  175597 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:52.157574  175597 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:52.158346  175597 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:17:52.190548  175597 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:52.190792  175597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984086
I0829 19:17:52.233206  175597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/functional-984086/id_rsa Username:docker}
I0829 19:17:52.343796  175597 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984086 image ls --format json --alsologtostderr:
[{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000
a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-984086"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker
.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"160c70786fb91ca3152c6507c146bf52e30fada51d5ec3a015a1a87459f4dbbd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-984086"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984086 image ls --format json --alsologtostderr:
I0829 19:17:51.862742  175564 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:51.863046  175564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.863090  175564 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:51.863112  175564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.863375  175564 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:17:51.864160  175564 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.864411  175564 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.865130  175564 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:17:51.893266  175564 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:51.893447  175564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984086
I0829 19:17:51.921231  175564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/functional-984086/id_rsa Username:docker}
I0829 19:17:52.017862  175564 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-984086 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 160c70786fb91ca3152c6507c146bf52e30fada51d5ec3a015a1a87459f4dbbd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-984086
size: "30"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-984086
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984086 image ls --format yaml --alsologtostderr:
I0829 19:17:51.193066  175497 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:51.194233  175497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.194256  175497 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:51.194267  175497 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:51.194645  175497 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:17:51.220432  175497 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.220752  175497 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:51.221671  175497 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:17:51.250092  175497 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:51.250255  175497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984086
I0829 19:17:51.296160  175497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/functional-984086/id_rsa Username:docker}
I0829 19:17:51.401615  175497 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)
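The listing above uses the yaml format; the same subcommand accepts other output formats. A minimal sketch against the profile from this run (the short, table, and json values are assumed from minikube's documented image ls formats and are not shown in this log):

    # List images cached in the cluster in alternative formats.
    out/minikube-linux-amd64 -p functional-984086 image ls --format table
    out/minikube-linux-amd64 -p functional-984086 image ls --format json
    out/minikube-linux-amd64 -p functional-984086 image ls --format short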

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-984086 ssh pgrep buildkitd: exit status 1 (420.493234ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image build -t localhost/my-image:functional-984086 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 image build -t localhost/my-image:functional-984086 testdata/build --alsologtostderr: (2.53371984s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-984086 image build -t localhost/my-image:functional-984086 testdata/build --alsologtostderr:
I0829 19:17:52.900340  175713 out.go:345] Setting OutFile to fd 1 ...
I0829 19:17:52.901416  175713 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:52.901513  175713 out.go:358] Setting ErrFile to fd 2...
I0829 19:17:52.901535  175713 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:17:52.901851  175713 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/bin
I0829 19:17:52.902744  175713 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:52.925568  175713 config.go:182] Loaded profile config "functional-984086": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:17:52.926729  175713 cli_runner.go:164] Run: docker container inspect functional-984086 --format={{.State.Status}}
I0829 19:17:52.955383  175713 ssh_runner.go:195] Run: systemctl --version
I0829 19:17:52.955478  175713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-984086
I0829 19:17:52.982468  175713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19530-128633/.minikube/machines/functional-984086/id_rsa Username:docker}
I0829 19:17:53.080065  175713 build_images.go:161] Building image from path: /tmp/build.3933464882.tar
I0829 19:17:53.080199  175713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 19:17:53.097781  175713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3933464882.tar
I0829 19:17:53.104274  175713 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3933464882.tar: stat -c "%s %y" /var/lib/minikube/build/build.3933464882.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3933464882.tar': No such file or directory
I0829 19:17:53.104394  175713 ssh_runner.go:362] scp /tmp/build.3933464882.tar --> /var/lib/minikube/build/build.3933464882.tar (3072 bytes)
I0829 19:17:53.147854  175713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3933464882
I0829 19:17:53.163715  175713 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3933464882 -xf /var/lib/minikube/build/build.3933464882.tar
I0829 19:17:53.180665  175713 docker.go:360] Building image: /var/lib/minikube/build/build.3933464882
I0829 19:17:53.180825  175713 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-984086 /var/lib/minikube/build/build.3933464882
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:9a3c67dd46e4fa45f8655b8057707715c192cfa0be0ea370cf63161c8c795260 done
#8 naming to localhost/my-image:functional-984086 done
#8 DONE 0.1s
I0829 19:17:55.281829  175713 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-984086 /var/lib/minikube/build/build.3933464882: (2.100958291s)
I0829 19:17:55.282023  175713 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3933464882
I0829 19:17:55.312478  175713 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3933464882.tar
I0829 19:17:55.334533  175713 build_images.go:217] Built localhost/my-image:functional-984086 from /tmp/build.3933464882.tar
I0829 19:17:55.334585  175713 build_images.go:133] succeeded building to: functional-984086
I0829 19:17:55.334595  175713 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.30s)
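Build steps #5 to #7 above imply a three-line Dockerfile: a busybox base image, a RUN true layer, and an ADD of content.txt. The sketch below reproduces an equivalent build outside the test harness; the directory, file contents, and tag are placeholders, and the Dockerfile is an inference from the build log rather than the literal testdata/build contents:

    # Reconstructed build context; not the literal testdata/build directory.
    mkdir -p /tmp/image-build-demo && cd /tmp/image-build-demo
    printf 'hello\n' > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    # Build inside the cluster's runtime and confirm the image is present.
    out/minikube-linux-amd64 -p functional-984086 image build -t localhost/my-image:demo .
    out/minikube-linux-amd64 -p functional-984086 image ls | grep my-image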

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.145924715s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-984086
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image load --daemon kicbase/echo-server:functional-984086 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-984086 image load --daemon kicbase/echo-server:functional-984086 --alsologtostderr: (1.20278082s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image load --daemon kicbase/echo-server:functional-984086 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.02881327s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-984086
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image load --daemon kicbase/echo-server:functional-984086 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image save kicbase/echo-server:functional-984086 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image rm kicbase/echo-server:functional-984086 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-984086
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 image save --daemon kicbase/echo-server:functional-984086 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-984086
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
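Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full round trip between the cluster runtime and the host. A condensed sketch of the same sequence using the commands from these cases (the tarball path is shortened to the home directory; the log used /home/g528047478195_compute/echo-server-save.tar):

    # Save the cluster's copy of the image to a tarball on the host.
    out/minikube-linux-amd64 -p functional-984086 image save kicbase/echo-server:functional-984086 ~/echo-server-save.tar
    # Remove it from the cluster runtime, then restore it from the tarball.
    out/minikube-linux-amd64 -p functional-984086 image rm kicbase/echo-server:functional-984086
    out/minikube-linux-amd64 -p functional-984086 image load ~/echo-server-save.tar
    # Export the cluster's copy back into the local docker daemon and verify it arrived.
    docker rmi kicbase/echo-server:functional-984086
    out/minikube-linux-amd64 -p functional-984086 image save --daemon kicbase/echo-server:functional-984086
    docker image inspect kicbase/echo-server:functional-984086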

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-984086 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)
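All three UpdateContextCmd cases run the same update-context command, which rewrites the profile's kubeconfig entry (useful when the container's IP or forwarded port has changed). A minimal sketch; the two kubectl checks are added verification steps, not part of the test:

    out/minikube-linux-amd64 -p functional-984086 update-context
    # Confirm the kubeconfig now points at the profile's current endpoint.
    kubectl config current-context
    kubectl --context functional-984086 cluster-info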

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-984086 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-984086
--- PASS: TestFunctional/delete_echo-server_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-984086
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-984086
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/FirstStart (84.11s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-701155 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:20:33.490879  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:21:01.200575  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-701155 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m24.108460427s)
--- PASS: TestStartStop/group/cloud-shell/serial/FirstStart (84.11s)
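The cert_rotation errors logged during this start reference the client certificate of the earlier addons-444829 profile, not cloud-shell-701155. The start itself is a plain minikube start with the flags below; a trimmed sketch with --alsologtostderr dropped for brevity:

    out/minikube-linux-amd64 start -p cloud-shell-701155 \
      --memory=2200 --wait=true \
      --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.31.0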

                                                
                                    
TestStartStop/group/cloud-shell/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-701155 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [59cb4a5c-eb3b-4270-949c-60a64d70f7f9] Pending
helpers_test.go:344: "busybox" [59cb4a5c-eb3b-4270-949c-60a64d70f7f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [59cb4a5c-eb3b-4270-949c-60a64d70f7f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: integration-test=busybox healthy within 9.007115724s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-701155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/cloud-shell/serial/DeployApp (9.54s)
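DeployApp applies minikube's testdata/busybox.yaml and then polls for up to 8m until a pod labelled integration-test=busybox is Running. Outside the harness, the same wait can be expressed with kubectl alone; a sketch in which the kubectl wait step stands in for the helper's polling:

    kubectl --context cloud-shell-701155 create -f testdata/busybox.yaml
    kubectl --context cloud-shell-701155 wait --for=condition=Ready pod \
      -l integration-test=busybox --timeout=8m
    kubectl --context cloud-shell-701155 exec busybox -- /bin/sh -c "ulimit -n"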

                                                
                                    
TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.64s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-701155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-701155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.345257282s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context cloud-shell-701155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.64s)
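This case enables the metrics-server addon with image and registry overrides while the cluster is running, then inspects the resulting Deployment. The same two commands, condensed for reference:

    out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-701155 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context cloud-shell-701155 describe deploy/metrics-server -n kube-system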

                                                
                                    
TestStartStop/group/cloud-shell/serial/Stop (11.23s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p cloud-shell-701155 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p cloud-shell-701155 --alsologtostderr -v=3: (11.224934382s)
--- PASS: TestStartStop/group/cloud-shell/serial/Stop (11.23s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-701155 -n cloud-shell-701155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-701155 -n cloud-shell-701155: exit status 7 (161.572218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-701155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.36s)
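minikube status exits non-zero against a stopped profile (exit status 7 here, which the test accepts), so scripted checks need to tolerate that before enabling further addons. A sketch; the || true guard is an addition for interactive use:

    # Prints "Stopped" and exits 7 while the node is down.
    out/minikube-linux-amd64 status --format='{{.Host}}' -p cloud-shell-701155 || true
    out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-701155 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4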

                                                
                                    
TestStartStop/group/cloud-shell/serial/SecondStart (273.69s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-701155 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:24:54.608410  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.615268  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.627084  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.648850  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.690682  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.772525  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:54.957976  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:55.280371  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:55.922826  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:57.204659  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:24:59.767061  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:25:04.889409  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:25:15.131365  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:25:33.490490  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/addons-444829/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:25:35.612886  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:26:16.574625  134686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19530-128633/.minikube/profiles/functional-984086/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-701155 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m32.974838603s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-701155 -n cloud-shell-701155
--- PASS: TestStartStop/group/cloud-shell/serial/SecondStart (273.69s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-678vs" [9c1f523d-fac6-4144-9792-002daa17964c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007110445s
--- PASS: TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-678vs" [9c1f523d-fac6-4144-9792-002daa17964c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005901981s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context cloud-shell-701155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p cloud-shell-701155 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.36s)
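The verification step lists the images present in the restarted cluster as JSON and reports anything outside the expected Kubernetes set (here the gcr.io/k8s-minikube/busybox image left over from DeployApp). A sketch of the same listing; the table format and grep filter are an illustrative manual check, not the test's own logic:

    out/minikube-linux-amd64 -p cloud-shell-701155 image list --format=json
    # Rough manual scan for images that are not part of the registry.k8s.io set.
    out/minikube-linux-amd64 -p cloud-shell-701155 image list --format=table | grep -v registry.k8s.io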

                                                
                                    
TestStartStop/group/cloud-shell/serial/Pause (4.68s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p cloud-shell-701155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p cloud-shell-701155 --alsologtostderr -v=1: (1.178867789s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-701155 -n cloud-shell-701155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-701155 -n cloud-shell-701155: exit status 2 (480.131035ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-701155 -n cloud-shell-701155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-701155 -n cloud-shell-701155: exit status 2 (520.946643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p cloud-shell-701155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-701155 -n cloud-shell-701155
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-701155 -n cloud-shell-701155
--- PASS: TestStartStop/group/cloud-shell/serial/Pause (4.68s)
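The Pause case relies on minikube status exiting 2 while components are paused: {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped until unpause restores them. A condensed sketch of the same cycle (node flag and --alsologtostderr omitted):

    out/minikube-linux-amd64 pause -p cloud-shell-701155
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p cloud-shell-701155 || true   # Paused, exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p cloud-shell-701155 || true     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p cloud-shell-701155
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p cloud-shell-701155           # exits 0 once running again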

                                                
                                    

Test skip (5/108)

TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    