Test Report: Docker_Cloud_Shell 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Test fail (6/108)
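The TestAddons/parallel/Registry failure below boils down to the in-cluster connectivity check timing out: the busybox test pod's wget against http://registry.kube-system.svc.cluster.local gets no response within 1m0s, while the test expects an "HTTP/1.1 200" reply. As a sketch only (assuming the addons-189999 profile from this run is still up with the registry addon enabled), the same check can be re-run by hand with the exact command from the log:

	kubectl --context addons-189999 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy registry prints the HTTP response headers; in this run the command exited 1 with "error: timed out waiting for the condition".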

TestAddons/parallel/Registry (76.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.064948ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0919 18:54:34.476872    7874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-dvhvr" [4659f1bd-f229-47b9-8db3-7f0ad80e4e86] Running
I0919 18:54:34.484672    7874 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:54:34.484724    7874 kapi.go:107] duration metric: took 26.437328ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005031642s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7c4lm" [0c2f5817-bdcc-4c04-b033-af85ade76356] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005096771s
addons_test.go:342: (dbg) Run:  kubectl --context addons-189999 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-189999 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-189999 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.133484539s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-189999 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 ip
2024/09/19 18:55:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-189999
helpers_test.go:235: (dbg) docker inspect addons-189999:

-- stdout --
	[
	    {
	        "Id": "48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092",
	        "Created": "2024-09-19T18:42:09.061069408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:42:09.262819504Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092/hostname",
	        "HostsPath": "/var/lib/docker/containers/48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092/hosts",
	        "LogPath": "/var/lib/docker/containers/48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092/48fd889d9663fd59aeab306a0961491b857bddd58770fdfceeec308a27b76092-json.log",
	        "Name": "/addons-189999",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-189999:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-189999",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4544fa34b4e266ffc639a8d3c8235fdd548e1a5d110f6351415f1288eb199251-init/diff:/var/lib/docker/overlay2/73ea53a5bdb8d0792b8aeaeeb277a5e698a65bb883d263ce5672d7261458ddb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4544fa34b4e266ffc639a8d3c8235fdd548e1a5d110f6351415f1288eb199251/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4544fa34b4e266ffc639a8d3c8235fdd548e1a5d110f6351415f1288eb199251/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4544fa34b4e266ffc639a8d3c8235fdd548e1a5d110f6351415f1288eb199251/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-189999",
	                "Source": "/var/lib/docker/volumes/addons-189999/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-189999",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-189999",
	                "name.minikube.sigs.k8s.io": "addons-189999",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61365e70afcf5da19ebb6f3f8972f92b513ca6e68d74ac0853646d682bc78d28",
	            "SandboxKey": "/var/run/docker/netns/61365e70afcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-189999": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bd7a1814a346a4daf97c830dd466998ebb63ce0604cc0d7ee8d25ef3d0f3ef1",
	                    "EndpointID": "436fd6f838ada53b481936f2c5a4006886eb0ef21a57b4764f88e8aa3de5fd0d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-189999",
	                        "48fd889d9663"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-189999 -n addons-189999
helpers_test.go:239: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p addons-189999 -n addons-189999: (1.026306091s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 logs -n 25: (1.884533858s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | disable dashboard -p                 | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:41 UTC |                     |
	|         | addons-189999                        |               |                       |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:41 UTC |                     |
	|         | addons-189999                        |               |                       |         |                     |                     |
	| start   | -p addons-189999 --wait=true         | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:41 UTC | 19 Sep 24 18:45 UTC |
	|         | --memory=4000 --alsologtostderr      |               |                       |         |                     |                     |
	|         | --addons=registry                    |               |                       |         |                     |                     |
	|         | --addons=metrics-server              |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots             |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                    |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner               |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget            |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |               |                       |         |                     |                     |
	|         | --driver=docker                      |               |                       |         |                     |                     |
	|         | --container-runtime=docker           |               |                       |         |                     |                     |
	|         | --addons=ingress                     |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                 |               |                       |         |                     |                     |
	|         | --addons=helm-tiller                 |               |                       |         |                     |                     |
	| addons  | addons-189999 addons disable         | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:46 UTC | 19 Sep 24 18:46 UTC |
	|         | volcano --alsologtostderr -v=1       |               |                       |         |                     |                     |
	| addons  | addons-189999 addons                 | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable csi-hostpath-driver          |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-189999 addons                 | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable volumesnapshots              |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| addons  | addons-189999 addons disable         | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | helm-tiller --alsologtostderr        |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| addons  | addons-189999 addons                 | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable metrics-server               |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1               |               |                       |         |                     |                     |
	| ip      | addons-189999 ip                     | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	| addons  | addons-189999 addons disable         | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | registry --alsologtostderr           |               |                       |         |                     |                     |
	|         | -v=1                                 |               |                       |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-189999 | g528047478195_compute | v1.34.0 | 19 Sep 24 18:55 UTC |                     |
	|         | addons-189999                        |               |                       |         |                     |                     |
	|---------|--------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:41:19
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:41:19.129722    7893 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:41:19.129975    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:41:19.129992    7893 out.go:358] Setting ErrFile to fd 2...
	I0919 18:41:19.130004    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:41:19.130293    7893 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
	W0919 18:41:19.130579    7893 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19664-430/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/config/config.json: no such file or directory
	I0919 18:41:19.131236    7893 out.go:352] Setting JSON to false
	I0919 18:41:19.133907    7893 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":1391,"bootTime":1726769888,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0919 18:41:19.134000    7893 start.go:139] virtualization:  guest
	I0919 18:41:19.138758    7893 out.go:177] * [addons-189999] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	W0919 18:41:19.142248    7893 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:41:19.142305    7893 notify.go:220] Checking for updates...
	I0919 18:41:19.142447    7893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:41:19.145742    7893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:41:19.148788    7893 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	I0919 18:41:19.151675    7893 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19664-430/.minikube
	I0919 18:41:19.155013    7893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:41:19.158090    7893 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0919 18:41:19.161185    7893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:41:19.206908    7893 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0919 18:41:19.207048    7893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:41:19.307747    7893 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:false NGoroutines:59 SystemTime:2024-09-19 18:41:19.291682149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:41:19.308024    7893 docker.go:318] overlay module found
	I0919 18:41:19.311693    7893 out.go:177] * Using the docker driver based on user configuration
	I0919 18:41:19.314675    7893 start.go:297] selected driver: docker
	I0919 18:41:19.314711    7893 start.go:901] validating driver "docker" against <nil>
	I0919 18:41:19.314738    7893 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:41:19.315413    7893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:41:19.409190    7893 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:false NGoroutines:59 SystemTime:2024-09-19 18:41:19.391803682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:41:19.409435    7893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:41:19.410365    7893 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0919 18:41:19.410390    7893 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0919 18:41:19.410445    7893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:41:19.414325    7893 out.go:177] * Using Docker driver with root privileges
	I0919 18:41:19.417572    7893 cni.go:84] Creating CNI manager for ""
	I0919 18:41:19.417695    7893 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:41:19.417715    7893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:41:19.417874    7893 start.go:340] cluster config:
	{Name:addons-189999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-189999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:41:19.421812    7893 out.go:177] * Starting "addons-189999" primary control-plane node in "addons-189999" cluster
	I0919 18:41:19.424738    7893 cache.go:121] Beginning downloading kic base image for docker with docker
	I0919 18:41:19.427698    7893 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:41:19.430297    7893 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:41:19.430393    7893 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:41:19.456623    7893 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:41:19.457630    7893 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:41:19.458024    7893 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:41:19.459338    7893 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0919 18:41:19.459414    7893 cache.go:56] Caching tarball of preloaded images
	I0919 18:41:19.459735    7893 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:41:19.463043    7893 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
	I0919 18:41:19.466318    7893 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0919 18:41:19.492931    7893 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0919 18:41:23.943150    7893 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0919 18:41:23.943397    7893 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0919 18:41:25.486735    7893 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 18:41:25.487275    7893 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/config.json ...
	I0919 18:41:25.487351    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/config.json: {Name:mk4bcb0f5e3cbfe2c55d53ac31d6ed1167fd67b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:41:29.578688    7893 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:41:29.578710    7893 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:41:56.712963    7893 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:41:56.713034    7893 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:41:56.713124    7893 start.go:360] acquireMachinesLock for addons-189999: {Name:mk77d830e61dd5f1c26b6a1a68a77bbb8d0805a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:41:56.713576    7893 start.go:364] duration metric: took 394.162µs to acquireMachinesLock for "addons-189999"
	I0919 18:41:56.713664    7893 start.go:93] Provisioning new machine with config: &{Name:addons-189999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-189999 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:41:56.713958    7893 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:41:56.718730    7893 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:41:56.719239    7893 start.go:159] libmachine.API.Create for "addons-189999" (driver="docker")
	I0919 18:41:56.719280    7893 client.go:168] LocalClient.Create starting
	I0919 18:41:56.719464    7893 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem
	I0919 18:41:57.075031    7893 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/cert.pem
	I0919 18:41:57.289342    7893 cli_runner.go:164] Run: docker network inspect addons-189999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:41:57.315057    7893 cli_runner.go:211] docker network inspect addons-189999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:41:57.315226    7893 network_create.go:284] running [docker network inspect addons-189999] to gather additional debugging logs...
	I0919 18:41:57.315262    7893 cli_runner.go:164] Run: docker network inspect addons-189999
	W0919 18:41:57.342989    7893 cli_runner.go:211] docker network inspect addons-189999 returned with exit code 1
	I0919 18:41:57.343032    7893 network_create.go:287] error running [docker network inspect addons-189999]: docker network inspect addons-189999: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-189999 not found
	I0919 18:41:57.343086    7893 network_create.go:289] output of [docker network inspect addons-189999]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-189999 not found
	
	** /stderr **
	I0919 18:41:57.343291    7893 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:41:57.369496    7893 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0169375a0}
	I0919 18:41:57.369587    7893 network_create.go:124] attempt to create docker network addons-189999 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0919 18:41:57.369698    7893 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-189999 addons-189999
	I0919 18:41:57.478633    7893 network_create.go:108] docker network addons-189999 192.168.49.0/24 created
	I0919 18:41:57.478679    7893 kic.go:121] calculated static IP "192.168.49.2" for the "addons-189999" container
	I0919 18:41:57.478862    7893 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:41:57.505023    7893 cli_runner.go:164] Run: docker volume create addons-189999 --label name.minikube.sigs.k8s.io=addons-189999 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:41:57.533798    7893 oci.go:103] Successfully created a docker volume addons-189999
	I0919 18:41:57.533957    7893 cli_runner.go:164] Run: docker run --rm --name addons-189999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-189999 --entrypoint /usr/bin/test -v addons-189999:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:42:00.634646    7893 cli_runner.go:217] Completed: docker run --rm --name addons-189999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-189999 --entrypoint /usr/bin/test -v addons-189999:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (3.100631799s)
	I0919 18:42:00.634687    7893 oci.go:107] Successfully prepared a docker volume addons-189999
	I0919 18:42:00.634725    7893 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:42:00.634771    7893 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:42:00.635210    7893 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-189999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:42:08.939784    7893 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-189999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (8.304493717s)
	I0919 18:42:08.939837    7893 kic.go:203] duration metric: took 8.305060228s to extract preloaded images to volume ...
	W0919 18:42:08.940063    7893 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 18:42:08.940155    7893 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 18:42:08.940260    7893 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:42:09.038139    7893 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-189999 --name addons-189999 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-189999 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-189999 --network addons-189999 --ip 192.168.49.2 --volume addons-189999:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:42:09.504716    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Running}}
	I0919 18:42:09.562004    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:09.610092    7893 cli_runner.go:164] Run: docker exec addons-189999 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:42:09.722977    7893 oci.go:144] the created container "addons-189999" has a running status.
	I0919 18:42:09.723021    7893 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa...
	I0919 18:42:10.267970    7893 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:42:10.346979    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:10.405515    7893 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:42:10.405606    7893 kic_runner.go:114] Args: [docker exec --privileged addons-189999 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:42:10.573144    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:10.670400    7893 machine.go:93] provisionDockerMachine start ...
	I0919 18:42:10.670569    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:10.743482    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:10.743994    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:10.744026    7893 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:42:11.059009    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-189999
	
	I0919 18:42:11.059049    7893 ubuntu.go:169] provisioning hostname "addons-189999"
	I0919 18:42:11.059226    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:11.113379    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:11.113722    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:11.113748    7893 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-189999 && echo "addons-189999" | sudo tee /etc/hostname
	I0919 18:42:11.327278    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-189999
	
	I0919 18:42:11.327434    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:11.368382    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:11.368709    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:11.368748    7893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-189999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-189999/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-189999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:42:11.544244    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:42:11.544281    7893 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19664-430/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube}
	I0919 18:42:11.544324    7893 ubuntu.go:177] setting up certificates
	I0919 18:42:11.544342    7893 provision.go:84] configureAuth start
	I0919 18:42:11.544453    7893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-189999
	I0919 18:42:11.573814    7893 provision.go:143] copyHostCerts
	I0919 18:42:11.573944    7893 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/key.pem (1675 bytes)
	I0919 18:42:11.574134    7893 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.pem (1119 bytes)
	I0919 18:42:11.574286    7893 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/cert.pem (1164 bytes)
	I0919 18:42:11.574402    7893 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-189999 san=[127.0.0.1 192.168.49.2 addons-189999 localhost minikube]
	I0919 18:42:12.111389    7893 provision.go:177] copyRemoteCerts
	I0919 18:42:12.111502    7893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:42:12.111589    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:12.139035    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:12.247386    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
	I0919 18:42:12.287886    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 18:42:12.326657    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 18:42:12.365485    7893 provision.go:87] duration metric: took 821.121009ms to configureAuth
	I0919 18:42:12.365526    7893 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:42:12.365978    7893 config.go:182] Loaded profile config "addons-189999": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:42:12.366125    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:12.399545    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:12.399827    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:12.399874    7893 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 18:42:12.555961    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 18:42:12.556074    7893 ubuntu.go:71] root file system type: overlay
	I0919 18:42:12.556581    7893 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 18:42:12.556834    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:12.584618    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:12.584944    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:12.585091    7893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 18:42:12.755825    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 18:42:12.755992    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:12.784248    7893 main.go:141] libmachine: Using SSH client type: native
	I0919 18:42:12.784524    7893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:42:12.784564    7893 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 18:42:13.895584    7893 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-19 18:42:12.751991580 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 18:42:13.895645    7893 machine.go:96] duration metric: took 3.225208217s to provisionDockerMachine
	I0919 18:42:13.895664    7893 client.go:171] duration metric: took 17.176370202s to LocalClient.Create
	I0919 18:42:13.895691    7893 start.go:167] duration metric: took 17.176456848s to libmachine.API.Create "addons-189999"
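The provisioning step above replaces /lib/systemd/system/docker.service only when the freshly rendered unit differs from the installed one, then daemon-reloads, enables and restarts docker (the "diff -u ... || { mv ...; systemctl ... }" one-liner and the diff output above). A minimal local Go sketch of that compare-then-swap pattern, illustrative only; paths and the service name are taken from the log, and the real flow runs these commands over SSH as root:

// syncunit.go: illustrative sketch of "replace the unit only if it changed,
// then reload and restart", mirroring the shell one-liner in the log above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = "/lib/systemd/system/docker.service.new"

	old, _ := os.ReadFile(cur) // a missing current unit is simply treated as "changed"
	fresh, err := os.ReadFile(next)
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(old, fresh) {
		log.Println("unit unchanged, nothing to do")
		return
	}
	if err := os.Rename(next, cur); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := run("systemctl", args...); err != nil {
			log.Fatal(err)
		}
	}
}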
	I0919 18:42:13.895707    7893 start.go:293] postStartSetup for "addons-189999" (driver="docker")
	I0919 18:42:13.895738    7893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:42:13.895890    7893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:42:13.895985    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:13.924895    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:14.032980    7893 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:42:14.038480    7893 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:42:14.038531    7893 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:42:14.038548    7893 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:42:14.038574    7893 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:42:14.038593    7893 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19664-430/.minikube/addons for local assets ...
	I0919 18:42:14.038692    7893 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19664-430/.minikube/files for local assets ...
	I0919 18:42:14.038747    7893 start.go:296] duration metric: took 143.017317ms for postStartSetup
	I0919 18:42:14.039373    7893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-189999
	I0919 18:42:14.065343    7893 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/config.json ...
	I0919 18:42:14.065997    7893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:42:14.066091    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:14.092596    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:14.193801    7893 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:42:14.201052    7893 start.go:128] duration metric: took 17.487070836s to createHost
	I0919 18:42:14.201086    7893 start.go:83] releasing machines lock for "addons-189999", held for 17.487476098s
	I0919 18:42:14.201232    7893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-189999
	I0919 18:42:14.228304    7893 ssh_runner.go:195] Run: cat /version.json
	I0919 18:42:14.228398    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:14.228573    7893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:42:14.228695    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:14.269856    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:14.271399    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:14.382952    7893 ssh_runner.go:195] Run: systemctl --version
	I0919 18:42:14.483600    7893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:42:14.490764    7893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 18:42:14.531195    7893 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:42:14.531431    7893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:42:14.577145    7893 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 18:42:14.577203    7893 start.go:495] detecting cgroup driver to use...
	I0919 18:42:14.577299    7893 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 18:42:14.577515    7893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:42:14.604127    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0919 18:42:14.619899    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 18:42:14.635541    7893 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 18:42:14.635700    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 18:42:14.651662    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:42:14.667060    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 18:42:14.683246    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:42:14.699195    7893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:42:14.714235    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 18:42:14.730069    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 18:42:14.745718    7893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 18:42:14.761802    7893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:42:14.776599    7893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:42:14.790962    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:14.924392    7893 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 18:42:15.045562    7893 start.go:495] detecting cgroup driver to use...
	I0919 18:42:15.045620    7893 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 18:42:15.045730    7893 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 18:42:15.089199    7893 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0919 18:42:15.089312    7893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 18:42:15.115653    7893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:42:15.153335    7893 ssh_runner.go:195] Run: which cri-dockerd
	I0919 18:42:15.161731    7893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 18:42:15.182207    7893 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0919 18:42:15.223998    7893 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 18:42:15.466789    7893 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 18:42:15.690331    7893 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0919 18:42:15.690543    7893 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 18:42:15.722442    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:15.853641    7893 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 18:42:16.287130    7893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 18:42:16.306113    7893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:42:16.325203    7893 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 18:42:16.467945    7893 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 18:42:16.604663    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:16.736330    7893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 18:42:16.763124    7893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:42:16.781419    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:16.916288    7893 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 18:42:17.028861    7893 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 18:42:17.029306    7893 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 18:42:17.038442    7893 start.go:563] Will wait 60s for crictl version
	I0919 18:42:17.038575    7893 ssh_runner.go:195] Run: which crictl
	I0919 18:42:17.046251    7893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:42:17.113326    7893 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0919 18:42:17.113440    7893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:42:17.157935    7893 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:42:17.202088    7893 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0919 18:42:17.202254    7893 cli_runner.go:164] Run: docker network inspect addons-189999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:42:17.228417    7893 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:42:17.234290    7893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
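The command above makes the host.minikube.internal mapping idempotent: any existing line for that name is filtered out of /etc/hosts, the current 192.168.49.1 entry is appended, and the result is copied back with sudo. A rough Go equivalent of the filter-and-append step, illustrative only; it renders the new file to a temp path, mirroring the /tmp/h.$$ intermediate in the log:

// hosts.go: illustrative sketch of the idempotent /etc/hosts update logged
// above; drops any stale line ending in "<tab>host.minikube.internal" and
// appends the current mapping. Writes only to a temp file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(hostsPath, ip, name string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as: grep -v $'\t<name>$'
		}
		keep = append(keep, line)
	}
	keep = append(keep, fmt.Sprintf("%s\t%s", ip, name))

	tmp, err := os.CreateTemp("", "hosts")
	if err != nil {
		return "", err
	}
	defer tmp.Close()
	if _, err := tmp.WriteString(strings.Join(keep, "\n") + "\n"); err != nil {
		return "", err
	}
	return tmp.Name(), nil // the log then runs: sudo cp <tmp> /etc/hosts
}

func main() {
	path, err := upsertHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
	if err != nil {
		panic(err)
	}
	fmt.Println("rendered hosts file at", path)
}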
	I0919 18:42:17.256410    7893 out.go:177]   - kubelet.cgroups-per-qos=false
	I0919 18:42:17.259401    7893 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0919 18:42:17.261880    7893 kubeadm.go:883] updating cluster {Name:addons-189999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-189999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:42:17.262062    7893 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:42:17.262202    7893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:42:17.295070    7893 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:42:17.295099    7893 docker.go:615] Images already preloaded, skipping extraction
	I0919 18:42:17.295268    7893 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:42:17.327022    7893 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:42:17.327060    7893 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:42:17.327078    7893 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0919 18:42:17.327274    7893 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-189999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-189999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:42:17.327435    7893 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 18:42:17.403132    7893 cni.go:84] Creating CNI manager for ""
	I0919 18:42:17.403176    7893 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:42:17.403211    7893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:42:17.403246    7893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-189999 NodeName:addons-189999 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:42:17.403561    7893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-189999"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:42:17.403691    7893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:42:17.418515    7893 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:42:17.418633    7893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:42:17.433106    7893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0919 18:42:17.464056    7893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:42:17.493375    7893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
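The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that reads such a stream back and lists each document's apiVersion and kind, illustrative only; it uses gopkg.in/yaml.v3 rather than any minikube package:

// kubeadmcfg.go: illustrative sketch that walks the multi-document YAML
// stream shown in the log and prints each document's apiVersion and kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the "---"-separated stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}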
	I0919 18:42:17.522944    7893 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:42:17.528736    7893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:42:17.547904    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:17.687470    7893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:42:17.717005    7893 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999 for IP: 192.168.49.2
	I0919 18:42:17.717036    7893 certs.go:194] generating shared ca certs ...
	I0919 18:42:17.717062    7893 certs.go:226] acquiring lock for ca certs: {Name:mk134a4b76189455be5d7d18b97e65ca883062c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:17.717410    7893 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.key
	I0919 18:42:17.861713    7893 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.crt ...
	I0919 18:42:17.861750    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.crt: {Name:mk64b412a53f4b8a2429a0a6f2e252846b0f4022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:17.862195    7893 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.key ...
	I0919 18:42:17.862222    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.key: {Name:mk19c7a49ceb58fecc0d297bf14e5147a99e6d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:17.862571    7893 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.key
	I0919 18:42:18.068069    7893 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.crt ...
	I0919 18:42:18.068109    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.crt: {Name:mk22d317f9d88d8a1e0800b1952d2fde9fd12f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.068618    7893 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.key ...
	I0919 18:42:18.068646    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.key: {Name:mk5440ee251e621f08cae899807560631afe2db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.068961    7893 certs.go:256] generating profile certs ...
	I0919 18:42:18.069057    7893 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.key
	I0919 18:42:18.069107    7893 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt with IP's: []
	I0919 18:42:18.192926    7893 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt ...
	I0919 18:42:18.192966    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: {Name:mk0cfc4dd79abfc1e864186eb383ee6dff93f3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.193423    7893 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.key ...
	I0919 18:42:18.193462    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.key: {Name:mk8d9a72506163eada6751d3e7db394f4c969732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.193827    7893 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key.05c90fb3
	I0919 18:42:18.193910    7893 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt.05c90fb3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:42:18.468804    7893 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt.05c90fb3 ...
	I0919 18:42:18.468855    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt.05c90fb3: {Name:mk4c2c4566176a2a6eb1e568a4cac3eb5e3e8f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.469312    7893 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key.05c90fb3 ...
	I0919 18:42:18.469348    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key.05c90fb3: {Name:mkbf024096e13170580449a100cfc34bdf5ee15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.469667    7893 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt.05c90fb3 -> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt
	I0919 18:42:18.469860    7893 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key.05c90fb3 -> /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key
	I0919 18:42:18.470019    7893 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.key
	I0919 18:42:18.470068    7893 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.crt with IP's: []
	I0919 18:42:18.656370    7893 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.crt ...
	I0919 18:42:18.656412    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.crt: {Name:mkdb8405718901788744e525cf6041b6aa5ad9df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:18.656887    7893 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.key ...
	I0919 18:42:18.656917    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.key: {Name:mk2e44758646b05a30cddc3fdb01047d5a0d4788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
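The certs.go/crypto.go lines above follow a recurring two-step pattern: generate a CA (minikubeCA, proxyClientCA), then sign leaf certificates against it, for example the apiserver cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]. A compact Go sketch of that pattern with crypto/x509; key sizes, lifetimes and subjects here are illustrative, not minikube's exact values:

// certsketch.go: illustrative sketch of "make a CA, then sign a leaf cert"
// as logged above. Emits the leaf certificate as PEM on stdout.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the apiserver cert line in the log above.
		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}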
	I0919 18:42:18.657579    7893 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 18:42:18.657661    7893 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/ca.pem (1119 bytes)
	I0919 18:42:18.657726    7893 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/cert.pem (1164 bytes)
	I0919 18:42:18.657832    7893 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/certs/key.pem (1675 bytes)
	I0919 18:42:18.658785    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:42:18.698527    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 18:42:18.737573    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:42:18.776478    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:42:18.816262    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:42:18.855523    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:42:18.894556    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:42:18.934279    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 18:42:18.972250    7893 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19664-430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:42:19.011348    7893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:42:19.041708    7893 ssh_runner.go:195] Run: openssl version
	I0919 18:42:19.050646    7893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:42:19.067213    7893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:42:19.073687    7893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:42 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:42:19.073806    7893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:42:19.084488    7893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
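The two commands above wire the minikube CA into the system trust store: "openssl x509 -hash" computes the subject hash (b5213941 here), and a "<hash>.0" symlink in /etc/ssl/certs is what OpenSSL-based clients actually look up. An illustrative Go sketch of the same step that creates the symlink in a scratch directory instead of /etc/ssl/certs:

// cahash.go: illustrative sketch of the hash-named CA symlink step logged
// above; computes the hash via the openssl CLI, then links <hash>.0 to the PEM.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	dir, err := os.MkdirTemp("", "certs")
	if err != nil {
		panic(err)
	}
	link := filepath.Join(dir, hash+".0")
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}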
	I0919 18:42:19.100337    7893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:42:19.106117    7893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:42:19.106193    7893 kubeadm.go:392] StartCluster: {Name:addons-189999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-189999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:42:19.106408    7893 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 18:42:19.137410    7893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:42:19.152649    7893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:42:19.169105    7893 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:42:19.169223    7893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:42:19.184119    7893 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:42:19.184148    7893 kubeadm.go:157] found existing configuration files:
	
	I0919 18:42:19.184264    7893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:42:19.199026    7893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:42:19.199145    7893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:42:19.213252    7893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:42:19.228607    7893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:42:19.228722    7893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:42:19.242928    7893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:42:19.257547    7893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:42:19.257724    7893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:42:19.271866    7893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:42:19.291013    7893 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:42:19.291146    7893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:42:19.310575    7893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:42:19.409328    7893 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:42:19.409446    7893 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:42:19.543093    7893 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:42:19.543306    7893 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:42:19.543466    7893 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:42:19.566294    7893 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:42:19.569953    7893 out.go:235]   - Generating certificates and keys ...
	I0919 18:42:19.570098    7893 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:42:19.570208    7893 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:42:19.834393    7893 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:42:20.021654    7893 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:42:20.252597    7893 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:42:20.448467    7893 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:42:20.540009    7893 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:42:20.540431    7893 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-189999 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:42:20.716971    7893 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:42:20.717983    7893 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-189999 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:42:21.188112    7893 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:42:21.314489    7893 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:42:21.485371    7893 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:42:21.486008    7893 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:42:21.620529    7893 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:42:21.793827    7893 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:42:22.115703    7893 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:42:22.353367    7893 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:42:22.708925    7893 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:42:22.709927    7893 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:42:22.713493    7893 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:42:22.716869    7893 out.go:235]   - Booting up control plane ...
	I0919 18:42:22.717040    7893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:42:22.717189    7893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:42:22.718307    7893 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:42:22.753501    7893 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:42:22.764402    7893 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:42:22.764507    7893 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:42:22.919466    7893 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:42:22.919681    7893 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:42:23.424283    7893 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.541978ms
	I0919 18:42:23.424485    7893 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:42:30.925763    7893 kubeadm.go:310] [api-check] The API server is healthy after 7.501877686s
	I0919 18:42:30.942795    7893 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:42:30.965933    7893 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:42:31.000622    7893 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:42:31.000970    7893 kubeadm.go:310] [mark-control-plane] Marking the node addons-189999 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:42:31.014921    7893 kubeadm.go:310] [bootstrap-token] Using token: 4eefu3.8g6j795xtorxrk69
	I0919 18:42:31.018378    7893 out.go:235]   - Configuring RBAC rules ...
	I0919 18:42:31.018562    7893 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:42:31.023918    7893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:42:31.033502    7893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:42:31.039384    7893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:42:31.044103    7893 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:42:31.049048    7893 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:42:31.334107    7893 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:42:31.851990    7893 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:42:32.334528    7893 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:42:32.336371    7893 kubeadm.go:310] 
	I0919 18:42:32.336587    7893 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:42:32.336635    7893 kubeadm.go:310] 
	I0919 18:42:32.336914    7893 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:42:32.336987    7893 kubeadm.go:310] 
	I0919 18:42:32.337089    7893 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:42:32.337305    7893 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:42:32.337421    7893 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:42:32.337435    7893 kubeadm.go:310] 
	I0919 18:42:32.337561    7893 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:42:32.337581    7893 kubeadm.go:310] 
	I0919 18:42:32.337690    7893 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:42:32.337711    7893 kubeadm.go:310] 
	I0919 18:42:32.337821    7893 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:42:32.338007    7893 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:42:32.338160    7893 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:42:32.338176    7893 kubeadm.go:310] 
	I0919 18:42:32.338372    7893 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:42:32.338545    7893 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:42:32.338566    7893 kubeadm.go:310] 
	I0919 18:42:32.338755    7893 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4eefu3.8g6j795xtorxrk69 \
	I0919 18:42:32.339003    7893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63554f0d2dc4ff4f7ec4c522c4b6d130751755b76978344a0d79010af15d4a2b \
	I0919 18:42:32.339056    7893 kubeadm.go:310] 	--control-plane 
	I0919 18:42:32.339072    7893 kubeadm.go:310] 
	I0919 18:42:32.339274    7893 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:42:32.339293    7893 kubeadm.go:310] 
	I0919 18:42:32.339470    7893 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4eefu3.8g6j795xtorxrk69 \
	I0919 18:42:32.339704    7893 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63554f0d2dc4ff4f7ec4c522c4b6d130751755b76978344a0d79010af15d4a2b 
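The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch that recomputes it, assuming the standard kubeadm CA location at /etc/kubernetes/pki/ca.crt:

```go
// cacerthash.go: recompute kubeadm's --discovery-token-ca-cert-hash value.
// Assumes the cluster CA certificate sits at the default kubeadm path.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```

Run on the control-plane node, this should reproduce the sha256:63554f0d... value shown in the join commands above.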
	I0919 18:42:32.345226    7893 kubeadm.go:310] W0919 18:42:19.403252    1686 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:42:32.345775    7893 kubeadm.go:310] W0919 18:42:19.405546    1686 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:42:32.346032    7893 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:42:32.346074    7893 cni.go:84] Creating CNI manager for ""
	I0919 18:42:32.346103    7893 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:42:32.349677    7893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:42:32.352825    7893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:42:32.369519    7893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
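The 1-k8s.conflist written above is copied from memory, so its exact 496-byte contents are not in the log. Purely as an illustration of the kind of bridge CNI configuration this step installs (every field value below is an assumption, not minikube's actual file), it could be generated like this:

```go
// writecni.go: write an illustrative bridge CNI conflist. Values are examples
// only, not the exact file minikube installs.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	conf := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", append(data, '\n'), 0o644); err != nil {
		log.Fatal(err)
	}
}
```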
	I0919 18:42:32.401411    7893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:42:32.401714    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:32.401890    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-189999 minikube.k8s.io/updated_at=2024_09_19T18_42_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-189999 minikube.k8s.io/primary=true
	I0919 18:42:32.615095    7893 ops.go:34] apiserver oom_adj: -16
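The -16 reported above comes from catting /proc/$(pgrep kube-apiserver)/oom_adj. A similar check in Go, using the newer oom_score_adj file (which uses a different scale than the legacy oom_adj) and finding the process by its /proc comm name:

```go
// oomcheck.go: locate the kube-apiserver process and print its OOM score adjustment.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		log.Fatal(err)
	}
	for _, comm := range matches {
		name, err := os.ReadFile(comm)
		if err != nil {
			continue // process may have exited between glob and read
		}
		if strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		pidDir := filepath.Dir(comm)
		adj, err := os.ReadFile(filepath.Join(pidDir, "oom_score_adj"))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pid %s oom_score_adj: %s", filepath.Base(pidDir), adj)
		return
	}
	log.Fatal("kube-apiserver process not found")
}
```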
	I0919 18:42:32.615272    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:33.116075    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:33.615404    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:34.116191    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:34.615402    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:35.115596    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:35.616276    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:36.116102    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:36.615826    7893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:42:36.736619    7893 kubeadm.go:1113] duration metric: took 4.335024298s to wait for elevateKubeSystemPrivileges
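The block of repeated `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: minikube simply polls until the default service account exists. A rough equivalent of that loop, shelling out to kubectl (kubeconfig path taken from the log):

```go
// waitsa.go: poll until the "default" service account exists, roughly mirroring
// the elevateKubeSystemPrivileges wait above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const kubeconfig = "/var/lib/minikube/kubeconfig" // path seen in the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls on a ~500ms cadence
	}
	log.Fatal("timed out waiting for the default service account")
}
```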
	I0919 18:42:36.736662    7893 kubeadm.go:394] duration metric: took 17.630474448s to StartCluster
	I0919 18:42:36.736689    7893 settings.go:142] acquiring lock: {Name:mk0c53db97dd5a31b260fcc8cf8421bbd56d46a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:36.737085    7893 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	I0919 18:42:36.737959    7893 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19664-430/kubeconfig: {Name:mk944ce9fed4aaf52b3a6ebd6d70f7b0ba24e11f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:42:36.738394    7893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:42:36.738451    7893 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:42:36.738786    7893 config.go:182] Loaded profile config "addons-189999": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:42:36.738865    7893 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:42:36.739017    7893 addons.go:69] Setting yakd=true in profile "addons-189999"
	I0919 18:42:36.739048    7893 addons.go:234] Setting addon yakd=true in "addons-189999"
	I0919 18:42:36.739096    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.739175    7893 addons.go:69] Setting inspektor-gadget=true in profile "addons-189999"
	I0919 18:42:36.739196    7893 addons.go:234] Setting addon inspektor-gadget=true in "addons-189999"
	I0919 18:42:36.739238    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.740012    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.740017    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.740949    7893 addons.go:69] Setting metrics-server=true in profile "addons-189999"
	I0919 18:42:36.740978    7893 addons.go:234] Setting addon metrics-server=true in "addons-189999"
	I0919 18:42:36.741020    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.741085    7893 addons.go:69] Setting cloud-spanner=true in profile "addons-189999"
	I0919 18:42:36.741103    7893 addons.go:234] Setting addon cloud-spanner=true in "addons-189999"
	I0919 18:42:36.741142    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.741814    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.741946    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.745176    7893 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-189999"
	I0919 18:42:36.745211    7893 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-189999"
	I0919 18:42:36.745254    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.746167    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.748612    7893 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-189999"
	I0919 18:42:36.748900    7893 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-189999"
	I0919 18:42:36.749074    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.751194    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.760986    7893 addons.go:69] Setting default-storageclass=true in profile "addons-189999"
	I0919 18:42:36.761049    7893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-189999"
	I0919 18:42:36.761653    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.763088    7893 addons.go:69] Setting registry=true in profile "addons-189999"
	I0919 18:42:36.763120    7893 addons.go:234] Setting addon registry=true in "addons-189999"
	I0919 18:42:36.763163    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.764037    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.778185    7893 addons.go:69] Setting gcp-auth=true in profile "addons-189999"
	I0919 18:42:36.778246    7893 mustload.go:65] Loading cluster: addons-189999
	I0919 18:42:36.778605    7893 config.go:182] Loaded profile config "addons-189999": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:42:36.779122    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.781245    7893 addons.go:69] Setting storage-provisioner=true in profile "addons-189999"
	I0919 18:42:36.781392    7893 addons.go:234] Setting addon storage-provisioner=true in "addons-189999"
	I0919 18:42:36.781499    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.795032    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.806951    7893 addons.go:69] Setting helm-tiller=true in profile "addons-189999"
	I0919 18:42:36.807006    7893 addons.go:234] Setting addon helm-tiller=true in "addons-189999"
	I0919 18:42:36.807087    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.807903    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.816140    7893 addons.go:69] Setting ingress=true in profile "addons-189999"
	I0919 18:42:36.816187    7893 addons.go:234] Setting addon ingress=true in "addons-189999"
	I0919 18:42:36.816254    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.817113    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.820213    7893 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-189999"
	I0919 18:42:36.820248    7893 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-189999"
	I0919 18:42:36.820730    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.830773    7893 addons.go:69] Setting ingress-dns=true in profile "addons-189999"
	I0919 18:42:36.830823    7893 addons.go:234] Setting addon ingress-dns=true in "addons-189999"
	I0919 18:42:36.830898    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.831705    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.841019    7893 addons.go:69] Setting volcano=true in profile "addons-189999"
	I0919 18:42:36.841062    7893 addons.go:234] Setting addon volcano=true in "addons-189999"
	I0919 18:42:36.841117    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.842101    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:36.863273    7893 addons.go:69] Setting volumesnapshots=true in profile "addons-189999"
	I0919 18:42:36.863313    7893 addons.go:234] Setting addon volumesnapshots=true in "addons-189999"
	I0919 18:42:36.863362    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:36.864311    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
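Each addon's setup above starts by checking the machine with `docker container inspect --format={{.State.Status}}`. The same state check can be sketched as:

```go
// containerstate.go: report a container's state the way the cli_runner calls
// above do (profile name taken from this log).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-189999")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("addons-189999 is", state) // e.g. "running"
}
```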
	I0919 18:42:36.872135    7893 out.go:177] * Verifying Kubernetes components...
	I0919 18:42:37.051326    7893 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:42:37.056870    7893 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:42:37.057010    7893 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:42:37.057190    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.082496    7893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:42:37.124535    7893 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:42:37.134799    7893 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:42:37.134961    7893 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:42:37.135114    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.214111    7893 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:42:37.217242    7893 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:42:37.217486    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:42:37.217820    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.221471    7893 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:42:37.242442    7893 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:42:37.242920    7893 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:42:37.244444    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.316292    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:37.362794    7893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:42:37.368919    7893 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:42:37.373306    7893 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:42:37.373414    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:42:37.373594    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.398948    7893 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:42:37.406491    7893 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:42:37.409486    7893 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:42:37.409609    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:42:37.409787    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.442758    7893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:42:37.448061    7893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:42:37.448094    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:42:37.448214    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.483782    7893 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:42:37.490192    7893 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:42:37.490227    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:42:37.490391    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.500539    7893 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-189999"
	I0919 18:42:37.500633    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:37.506803    7893 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:42:37.507904    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:37.538479    7893 addons.go:234] Setting addon default-storageclass=true in "addons-189999"
	I0919 18:42:37.538623    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:37.539539    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:37.552632    7893 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0919 18:42:37.552786    7893 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:42:37.552831    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:42:37.553045    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.554154    7893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:42:37.579200    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:42:37.588564    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:42:37.592007    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:42:37.592100    7893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:42:37.592132    7893 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0919 18:42:37.596609    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:42:37.599107    7893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:42:37.599340    7893 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0919 18:42:37.599143    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:42:37.602414    7893 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:42:37.602438    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:42:37.602538    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.615127    7893 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:42:37.615242    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0919 18:42:37.615455    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.671252    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:42:37.674424    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:42:37.677960    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:42:37.680535    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:42:37.680630    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:42:37.680774    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:37.709446    7893 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:42:37.712521    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:42:37.712634    7893 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:42:37.712778    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
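The docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above resolve which host port is published for the container's SSH port (32768 in the sshutil lines that follow). A small sketch of the same lookup:

```go
// sshport.go: look up the host port mapped to the container's 22/tcp, as the
// cli_runner inspect calls above do.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-189999").Output()
	if err != nil {
		log.Fatal(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Printf("ssh is reachable at 127.0.0.1:%s\n", port) // 32768 in this run
}
```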
	I0919 18:42:37.791347    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:37.846075    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:37.862946    7893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:42:37.875949    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:37.906418    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:37.915113    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.046657    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.064620    7893 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:42:38.064731    7893 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:42:38.064920    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:38.079706    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.167249    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.194105    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.202660    7893 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:42:38.205752    7893 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:42:38.208688    7893 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:42:38.208726    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:42:38.208826    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:38.220959    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.222498    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	W0919 18:42:38.228087    7893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:42:38.228129    7893 retry.go:31] will retry after 150.551547ms: ssh: handshake failed: EOF
	I0919 18:42:38.242967    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	W0919 18:42:38.248176    7893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:42:38.248215    7893 retry.go:31] will retry after 159.15949ms: ssh: handshake failed: EOF
	I0919 18:42:38.265077    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.330338    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:38.361110    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	W0919 18:42:38.381098    7893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:42:38.381133    7893 retry.go:31] will retry after 214.918477ms: ssh: handshake failed: EOF
	W0919 18:42:38.409499    7893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:42:38.409538    7893 retry.go:31] will retry after 232.95706ms: ssh: handshake failed: EOF
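The dial failure warnings above are transient SSH handshake EOFs that retry.go retries after short, growing delays. A simplified dial-with-retry around golang.org/x/crypto/ssh (user and port from this log, an assumed ~/.minikube key path, and host-key checking disabled purely for illustration):

```go
// sshretry.go: retry an SSH dial on transient handshake failures, similar to the
// retry behaviour logged above.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Duration(150+50*i) * time.Millisecond) // grow the delay a little each try
	}
	return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-189999/id_rsa") // assumed layout
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		Timeout:         10 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32768", cfg, 5)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
```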
	I0919 18:42:38.897986    7893 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:42:38.898016    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:42:39.100019    7893 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:42:39.100056    7893 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:42:39.165830    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:42:39.211084    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:42:39.225393    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:42:39.245094    7893 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:42:39.245127    7893 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:42:39.296081    7893 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:42:39.296116    7893 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:42:39.354419    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:42:39.455983    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:42:39.478598    7893 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:42:39.478635    7893 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:42:39.642760    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:42:39.642799    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:42:39.716658    7893 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:42:39.716711    7893 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:42:39.727663    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:42:39.788226    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:42:39.934234    7893 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:42:39.934291    7893 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:42:39.969306    7893 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:42:39.969336    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:42:39.974817    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:42:40.018099    7893 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:42:40.018129    7893 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:42:40.024689    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:42:40.024719    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:42:40.025302    7893 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:42:40.025322    7893 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:42:40.117105    7893 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:42:40.117138    7893 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:42:40.298709    7893 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:42:40.298740    7893 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:42:40.392507    7893 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:42:40.392538    7893 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:42:40.441684    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:42:40.510303    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:42:40.510332    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:42:40.520510    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:42:40.542926    7893 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:42:40.542959    7893 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:42:40.610325    7893 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:42:40.610362    7893 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:42:40.922689    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:42:41.043085    7893 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:42:41.043124    7893 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:42:41.051941    7893 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:42:41.051977    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:42:41.099004    7893 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:42:41.099039    7893 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:42:41.118268    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:42:41.118298    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:42:41.166154    7893 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.803148222s)
	I0919 18:42:41.166192    7893 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
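The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.49.1. A quick verification that the record landed, shelling out to kubectl:

```go
// checkhosts.go: confirm the host.minikube.internal record was injected into the
// coredns ConfigMap, as reported above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
		"-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), "host.minikube.internal") {
		fmt.Println("host record present in the CoreDNS Corefile")
	} else {
		fmt.Println("host record missing")
	}
}
```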
	I0919 18:42:41.168024    7893 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.30496487s)
	I0919 18:42:41.169390    7893 node_ready.go:35] waiting up to 6m0s for node "addons-189999" to be "Ready" ...
	I0919 18:42:41.295138    7893 node_ready.go:49] node "addons-189999" has status "Ready":"True"
	I0919 18:42:41.295291    7893 node_ready.go:38] duration metric: took 125.869432ms for node "addons-189999" to be "Ready" ...
	I0919 18:42:41.295358    7893 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:42:41.549593    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:42:41.549624    7893 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:42:41.608037    7893 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace to be "Ready" ...
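From here on, pod_ready.go polls the coredns pod's Ready condition (the repeated "Ready":"False" lines below) for up to 6 minutes. The same check with kubectl's JSONPath output, as a sketch:

```go
// podready.go: poll a pod's Ready condition, mirroring the pod_ready.go waits in
// this log (pod name from the log; kubeconfig path assumed).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const jsonpath = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"-n", "kube-system", "get", "pod", "coredns-7c65d6cfc9-6w8rx",
			"-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for the pod to become Ready")
}
```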
	I0919 18:42:41.625644    7893 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:42:41.625684    7893 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:42:41.660392    7893 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:42:41.660425    7893 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:42:41.684756    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:42:41.911292    7893 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-189999" context rescaled to 1 replicas
	I0919 18:42:41.976620    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:42:41.976655    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:42:42.174306    7893 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:42:42.174335    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:42:42.191424    7893 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:42:42.191457    7893 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:42:42.462645    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:42:42.462699    7893 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:42:42.651660    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:42:42.656134    7893 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:42:42.656166    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:42:42.864806    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:42:42.864857    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:42:43.129762    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:42:43.688253    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:42:43.688289    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:42:44.115711    7893 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:42:44.115754    7893 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:42:44.239941    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:44.601441    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:42:46.233801    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.067898309s)
	I0919 18:42:46.233910    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.022792432s)
	I0919 18:42:47.144313    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:49.351043    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:49.801880    7893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:42:49.802040    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:49.859099    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:50.078652    7893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:42:50.145472    7893 addons.go:234] Setting addon gcp-auth=true in "addons-189999"
	I0919 18:42:50.145544    7893 host.go:66] Checking if "addons-189999" exists ...
	I0919 18:42:50.146428    7893 cli_runner.go:164] Run: docker container inspect addons-189999 --format={{.State.Status}}
	I0919 18:42:50.218412    7893 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:42:50.218828    7893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-189999
	I0919 18:42:50.279677    7893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/addons-189999/id_rsa Username:docker}
	I0919 18:42:51.552927    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:53.854275    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:56.308728    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:58.580973    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:01.726831    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:04.238134    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:06.223702    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (26.998154191s)
	I0919 18:43:06.223773    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (26.869318702s)
	I0919 18:43:06.223997    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (26.76798399s)
	I0919 18:43:06.224162    7893 addons.go:475] Verifying addon ingress=true in "addons-189999"
	I0919 18:43:06.224171    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (26.435913581s)
	I0919 18:43:06.224510    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (26.249657117s)
	I0919 18:43:06.224634    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (25.782919005s)
	I0919 18:43:06.224653    7893 addons.go:475] Verifying addon metrics-server=true in "addons-189999"
	I0919 18:43:06.224715    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (25.704159104s)
	I0919 18:43:06.224728    7893 addons.go:475] Verifying addon registry=true in "addons-189999"
	I0919 18:43:06.224131    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (26.496434796s)
	I0919 18:43:06.225190    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (25.302458594s)
	I0919 18:43:06.225281    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (24.540477997s)
	I0919 18:43:06.225818    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (23.574115801s)
	W0919 18:43:06.226335    7893 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:43:06.226363    7893 retry.go:31] will retry after 195.3714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
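The failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, before that CRD is established, hence the retry (and the later apply --force). One way to avoid it is to wait for the CRD to report Established before applying dependent objects; a sketch using kubectl wait:

```go
// waitcrd.go: wait until the volumesnapshotclasses CRD is established before
// applying objects of that kind, avoiding the "ensure CRDs are installed first"
// error above. Kubeconfig path assumed.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
		"wait", "--for", "condition=established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("CRD never became established: %v\n%s", err, out)
	}
	fmt.Print(string(out))
	// Only now apply csi-hostpath-snapshotclass.yaml and the other dependents.
}
```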
	I0919 18:43:06.225939    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (23.096139889s)
	I0919 18:43:06.230003    7893 out.go:177] * Verifying ingress addon...
	I0919 18:43:06.242645    7893 out.go:177] * Verifying registry addon...
	I0919 18:43:06.244146    7893 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:43:06.242644    7893 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-189999 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:43:06.251302    7893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:43:06.422157    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:43:06.523503    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:06.542226    7893 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:43:06.542272    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:06.578035    7893 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:43:06.578090    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 18:43:07.744185    7893 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
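The "object has been modified" error above is an ordinary optimistic-concurrency conflict on the StorageClass update; the usual remedy is to re-read the object and retry. A sketch with client-go's conflict-retry helper (the local-path class name comes from the error message; the kubeconfig path is assumed):

```go
// defaultsc.go: mark a StorageClass non-default inside a retry-on-conflict loop,
// the standard fix for the "object has been modified" error above.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries a fresh resourceVersion.
		sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("local-path is no longer the default StorageClass")
}
```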
	I0919 18:43:07.747347    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (23.14583791s)
	I0919 18:43:07.747388    7893 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-189999"
	I0919 18:43:07.747681    7893 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (17.529224336s)
	I0919 18:43:07.750769    7893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:43:07.750885    7893 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:43:07.753397    7893 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:43:07.754883    7893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:43:07.756971    7893 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:43:07.757000    7893 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:43:07.842271    7893 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:43:07.842386    7893 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:43:07.934246    7893 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:43:07.934361    7893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:43:08.013564    7893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:43:08.073521    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:08.076090    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:08.525937    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:08.581179    7893 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:43:08.581285    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:08.582751    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:08.584914    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:08.904099    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:08.925570    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:08.927254    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:09.051051    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:09.053661    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:09.053793    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:09.820714    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:09.839807    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:09.954085    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:10.022461    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:10.023511    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:10.294807    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:10.411947    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:10.413805    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:10.546049    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:10.565612    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:10.873481    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:10.942931    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:10.946017    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:11.285157    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:11.384645    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:11.386809    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:11.597361    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.175045499s)
	I0919 18:43:11.597593    7893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.583910453s)
	I0919 18:43:11.603532    7893 addons.go:475] Verifying addon gcp-auth=true in "addons-189999"
	I0919 18:43:11.607623    7893 out.go:177] * Verifying gcp-auth addon...
	I0919 18:43:11.615109    7893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:43:11.640125    7893 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:43:11.752498    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:11.757389    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:11.762668    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:12.278381    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:12.280044    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:12.281327    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:12.621242    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:12.751011    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:12.756951    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:12.761643    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:13.324182    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:13.327686    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:13.329461    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:13.764462    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:13.774885    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:13.775622    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:14.289835    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:14.291097    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:14.293538    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:14.754424    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:14.761430    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:14.769685    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:15.120258    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:15.256582    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:15.270714    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:15.274771    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:15.870643    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:15.907832    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:15.908496    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:16.453446    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:16.459604    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:16.462859    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:16.752521    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:16.763591    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:16.769562    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:17.125603    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:17.258604    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:17.262680    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:17.265675    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:17.754781    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:17.765953    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:17.767686    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:18.280469    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:18.284147    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:18.286117    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:18.752887    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:18.761610    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:18.768243    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:19.131527    7893 pod_ready.go:103] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"False"
	I0919 18:43:19.252800    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:19.258798    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:19.265473    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:19.751927    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:19.762584    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:19.767723    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:20.140450    7893 pod_ready.go:93] pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.140586    7893 pod_ready.go:82] duration metric: took 38.532417509s for pod "coredns-7c65d6cfc9-6w8rx" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.140651    7893 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-swlc6" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.146329    7893 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-swlc6" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-swlc6" not found
	I0919 18:43:20.146432    7893 pod_ready.go:82] duration metric: took 5.735317ms for pod "coredns-7c65d6cfc9-swlc6" in "kube-system" namespace to be "Ready" ...
	E0919 18:43:20.146471    7893 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-swlc6" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-swlc6" not found
	I0919 18:43:20.146537    7893 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.160192    7893 pod_ready.go:93] pod "etcd-addons-189999" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.160297    7893 pod_ready.go:82] duration metric: took 13.718723ms for pod "etcd-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.160363    7893 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.182465    7893 pod_ready.go:93] pod "kube-apiserver-addons-189999" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.182641    7893 pod_ready.go:82] duration metric: took 22.23047ms for pod "kube-apiserver-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.182708    7893 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.199297    7893 pod_ready.go:93] pod "kube-controller-manager-addons-189999" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.199404    7893 pod_ready.go:82] duration metric: took 16.654042ms for pod "kube-controller-manager-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.199466    7893 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zhhkx" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.267973    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:20.270060    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:20.272810    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:20.313638    7893 pod_ready.go:93] pod "kube-proxy-zhhkx" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.313669    7893 pod_ready.go:82] duration metric: took 114.16101ms for pod "kube-proxy-zhhkx" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.313685    7893 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.716126    7893 pod_ready.go:93] pod "kube-scheduler-addons-189999" in "kube-system" namespace has status "Ready":"True"
	I0919 18:43:20.716158    7893 pod_ready.go:82] duration metric: took 402.460241ms for pod "kube-scheduler-addons-189999" in "kube-system" namespace to be "Ready" ...
	I0919 18:43:20.716171    7893 pod_ready.go:39] duration metric: took 39.420315791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:43:20.716202    7893 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:43:20.716329    7893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:43:20.797328    7893 api_server.go:72] duration metric: took 44.058838549s to wait for apiserver process to appear ...
	I0919 18:43:20.797365    7893 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:43:20.797403    7893 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:43:20.809089    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:20.810034    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:20.811886    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:20.815781    7893 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:43:20.822253    7893 api_server.go:141] control plane version: v1.31.1
	I0919 18:43:20.822290    7893 api_server.go:131] duration metric: took 24.911192ms to wait for apiserver health ...
	I0919 18:43:20.822305    7893 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:43:20.992905    7893 system_pods.go:59] 18 kube-system pods found
	I0919 18:43:20.992959    7893 system_pods.go:61] "coredns-7c65d6cfc9-6w8rx" [a644b595-7ddc-4542-ab55-c18231ba7f4f] Running
	I0919 18:43:20.992977    7893 system_pods.go:61] "csi-hostpath-attacher-0" [f473baa5-d88c-4a45-b32c-10a432789c88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:43:20.992989    7893 system_pods.go:61] "csi-hostpath-resizer-0" [c65e9389-3cf8-4fe5-bb74-06571a36a15d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:43:20.993006    7893 system_pods.go:61] "csi-hostpathplugin-55dg6" [360b119c-75fe-4868-afa2-cb421d62b155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:43:20.993019    7893 system_pods.go:61] "etcd-addons-189999" [f79fe604-44dd-4661-a742-975d2a938642] Running
	I0919 18:43:20.993028    7893 system_pods.go:61] "kube-apiserver-addons-189999" [f6ceb49d-298f-4858-8be7-2e94ad646ea4] Running
	I0919 18:43:20.993038    7893 system_pods.go:61] "kube-controller-manager-addons-189999" [6055e09f-ab12-41e5-ab25-0af77b1ea3af] Running
	I0919 18:43:20.993071    7893 system_pods.go:61] "kube-ingress-dns-minikube" [ce22ee30-c7af-4745-8e96-6352f207f390] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 18:43:20.993090    7893 system_pods.go:61] "kube-proxy-zhhkx" [e99b5610-7a86-4044-bb10-949694810512] Running
	I0919 18:43:20.993098    7893 system_pods.go:61] "kube-scheduler-addons-189999" [2c63b986-2937-4e2b-b006-6d0fcb146217] Running
	I0919 18:43:20.993150    7893 system_pods.go:61] "metrics-server-84c5f94fbc-4p8l8" [f09b1fc6-b6de-47ef-834c-4d9b2f35aff0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:43:20.993171    7893 system_pods.go:61] "nvidia-device-plugin-daemonset-r9fh5" [5f43e6be-7391-496e-9c51-545bfef3ed7f] Running
	I0919 18:43:20.993181    7893 system_pods.go:61] "registry-66c9cd494c-dvhvr" [4659f1bd-f229-47b9-8db3-7f0ad80e4e86] Running
	I0919 18:43:20.993192    7893 system_pods.go:61] "registry-proxy-7c4lm" [0c2f5817-bdcc-4c04-b033-af85ade76356] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 18:43:20.993205    7893 system_pods.go:61] "snapshot-controller-56fcc65765-vhpxh" [dc0a28c9-3c3e-4295-b9d2-9f8bda693053] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:43:20.993224    7893 system_pods.go:61] "snapshot-controller-56fcc65765-wsj9j" [53553a08-7335-44cf-998f-ce110f91a5a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:43:20.993233    7893 system_pods.go:61] "storage-provisioner" [2e0f0e92-05fb-43c0-b23a-3dcc46705d23] Running
	I0919 18:43:20.993252    7893 system_pods.go:61] "tiller-deploy-b48cc5f79-lxzbk" [0c431a1d-0059-4b8e-897a-3feab83def78] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 18:43:20.993271    7893 system_pods.go:74] duration metric: took 170.954234ms to wait for pod list to return data ...
	I0919 18:43:20.993288    7893 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:43:21.114028    7893 default_sa.go:45] found service account: "default"
	I0919 18:43:21.114148    7893 default_sa.go:55] duration metric: took 120.845229ms for default service account to be created ...
	I0919 18:43:21.114215    7893 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:43:21.255218    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:21.256607    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:21.264062    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:21.337704    7893 system_pods.go:86] 18 kube-system pods found
	I0919 18:43:21.337751    7893 system_pods.go:89] "coredns-7c65d6cfc9-6w8rx" [a644b595-7ddc-4542-ab55-c18231ba7f4f] Running
	I0919 18:43:21.337766    7893 system_pods.go:89] "csi-hostpath-attacher-0" [f473baa5-d88c-4a45-b32c-10a432789c88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:43:21.337787    7893 system_pods.go:89] "csi-hostpath-resizer-0" [c65e9389-3cf8-4fe5-bb74-06571a36a15d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:43:21.337814    7893 system_pods.go:89] "csi-hostpathplugin-55dg6" [360b119c-75fe-4868-afa2-cb421d62b155] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:43:21.337827    7893 system_pods.go:89] "etcd-addons-189999" [f79fe604-44dd-4661-a742-975d2a938642] Running
	I0919 18:43:21.337856    7893 system_pods.go:89] "kube-apiserver-addons-189999" [f6ceb49d-298f-4858-8be7-2e94ad646ea4] Running
	I0919 18:43:21.337940    7893 system_pods.go:89] "kube-controller-manager-addons-189999" [6055e09f-ab12-41e5-ab25-0af77b1ea3af] Running
	I0919 18:43:21.337961    7893 system_pods.go:89] "kube-ingress-dns-minikube" [ce22ee30-c7af-4745-8e96-6352f207f390] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 18:43:21.337969    7893 system_pods.go:89] "kube-proxy-zhhkx" [e99b5610-7a86-4044-bb10-949694810512] Running
	I0919 18:43:21.337980    7893 system_pods.go:89] "kube-scheduler-addons-189999" [2c63b986-2937-4e2b-b006-6d0fcb146217] Running
	I0919 18:43:21.337993    7893 system_pods.go:89] "metrics-server-84c5f94fbc-4p8l8" [f09b1fc6-b6de-47ef-834c-4d9b2f35aff0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:43:21.338006    7893 system_pods.go:89] "nvidia-device-plugin-daemonset-r9fh5" [5f43e6be-7391-496e-9c51-545bfef3ed7f] Running
	I0919 18:43:21.338017    7893 system_pods.go:89] "registry-66c9cd494c-dvhvr" [4659f1bd-f229-47b9-8db3-7f0ad80e4e86] Running
	I0919 18:43:21.338037    7893 system_pods.go:89] "registry-proxy-7c4lm" [0c2f5817-bdcc-4c04-b033-af85ade76356] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 18:43:21.338058    7893 system_pods.go:89] "snapshot-controller-56fcc65765-vhpxh" [dc0a28c9-3c3e-4295-b9d2-9f8bda693053] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:43:21.338077    7893 system_pods.go:89] "snapshot-controller-56fcc65765-wsj9j" [53553a08-7335-44cf-998f-ce110f91a5a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:43:21.338086    7893 system_pods.go:89] "storage-provisioner" [2e0f0e92-05fb-43c0-b23a-3dcc46705d23] Running
	I0919 18:43:21.338097    7893 system_pods.go:89] "tiller-deploy-b48cc5f79-lxzbk" [0c431a1d-0059-4b8e-897a-3feab83def78] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 18:43:21.338113    7893 system_pods.go:126] duration metric: took 223.8537ms to wait for k8s-apps to be running ...
	I0919 18:43:21.338134    7893 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:43:21.338251    7893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:43:21.369167    7893 system_svc.go:56] duration metric: took 31.022052ms WaitForService to wait for kubelet
	I0919 18:43:21.369258    7893 kubeadm.go:582] duration metric: took 44.630748797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:43:21.369302    7893 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:43:21.534500    7893 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0919 18:43:21.534615    7893 node_conditions.go:123] node cpu capacity is 2
	I0919 18:43:21.534660    7893 node_conditions.go:105] duration metric: took 165.340465ms to run NodePressure ...
	I0919 18:43:21.534744    7893 start.go:241] waiting for startup goroutines ...
	I0919 18:43:21.784663    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:21.785797    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:21.789795    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:22.252520    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:22.271294    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:22.273137    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:22.752070    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:22.763610    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:22.770694    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:23.254332    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:23.263782    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:23.268310    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:23.758996    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:23.763534    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:23.767399    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:24.253784    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:24.258778    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:24.264452    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:24.760473    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:24.762054    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:24.766808    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:25.269890    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:25.291426    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:25.305170    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:25.758526    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:25.764199    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:25.766458    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:26.254625    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:26.263545    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:26.267148    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:26.750083    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:26.756230    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:26.760428    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:27.250567    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:27.255624    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:27.260383    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:27.755528    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:27.767568    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:27.770980    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:28.278058    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:28.294512    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:28.304055    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:28.755819    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:28.759565    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:28.764014    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:29.264481    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:29.268302    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:29.272823    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:29.753427    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:29.759332    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:29.767794    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:30.252542    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:30.262031    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:30.269080    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:30.800569    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:30.821474    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:30.825106    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:31.278892    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:31.280422    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:31.283000    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:31.760563    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:31.774886    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:31.782624    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:32.261652    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:32.262253    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:32.272491    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:32.770989    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:32.792756    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:32.793770    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:33.251682    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:33.265199    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:33.271993    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:33.749927    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:33.755223    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:33.760119    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:34.333621    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:34.337291    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:34.340572    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:34.753402    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:34.755877    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:43:34.763704    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:35.251527    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:35.272070    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:35.273265    7893 kapi.go:107] duration metric: took 29.021965098s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:43:35.785755    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:35.791739    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:36.253129    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:36.261643    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:36.752249    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:36.768838    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:37.263441    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:37.270190    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:37.753473    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:37.767334    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:38.320362    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:38.320528    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:38.763755    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:38.785623    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:39.265686    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:39.267058    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:39.766525    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:39.768580    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:40.255325    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:40.275969    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:40.801501    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:40.805921    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:41.494644    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:41.500181    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:42.017204    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:42.017298    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:42.256466    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:42.267335    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:42.768962    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:42.771058    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:43.253042    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:43.278476    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:43.774523    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:43.781216    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:44.259234    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:44.267464    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:44.755777    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:44.761441    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:45.261894    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:45.265875    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:45.752033    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:45.761155    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:46.256093    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:46.267517    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:47.189328    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:47.191760    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:47.249398    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:47.261785    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:47.753315    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:47.761910    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:48.277591    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:48.299284    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:48.752621    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:48.779343    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:49.258734    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:49.285487    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:49.768745    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:49.768692    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:50.568620    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:50.571333    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:50.969788    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:50.970834    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:51.259464    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:51.263824    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:51.780290    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:51.786834    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:52.254215    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:52.264438    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:52.783863    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:52.810879    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:53.280097    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:53.280538    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:53.755780    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:53.766000    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:54.279139    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:54.281640    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:54.787480    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:54.789684    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:55.286282    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:55.288883    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:55.751203    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:55.766421    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:56.255531    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:56.282390    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:56.791471    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:56.805988    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:57.254249    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:57.262896    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:57.759103    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:57.782375    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:58.250856    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:58.261068    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:58.751386    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:58.760650    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:59.258116    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:59.265918    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:43:59.790096    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:59.822042    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:00.266976    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:00.271722    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:00.872726    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:00.884707    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:01.250469    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:01.262543    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:01.763255    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:01.771443    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:02.252163    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:02.266696    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:02.753005    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:02.764329    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:03.268905    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:03.269808    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:03.752919    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:03.768923    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:04.249522    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:04.262391    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:04.751124    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:04.762667    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:05.261404    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:05.266178    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:05.753151    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:05.765315    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:06.253390    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:06.263134    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:06.760402    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:06.776472    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:07.252691    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:07.275692    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:07.751645    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:07.771510    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:08.263592    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:08.282383    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:08.765191    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:08.777538    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:09.282672    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:09.311607    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:09.788393    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:09.801773    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:10.271487    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:10.276658    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:10.807025    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:10.899951    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:11.251151    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:11.285739    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:12.038125    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:12.038236    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:12.257665    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:12.261900    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:12.759377    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:12.764360    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:13.266017    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:13.281878    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:13.757651    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:13.768891    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:14.253153    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:14.263081    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:14.757694    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:14.776215    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:15.254473    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:15.262530    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:15.755043    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:15.766186    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:16.288072    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:16.289864    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:16.756001    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:16.766159    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:17.251291    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:17.264569    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:17.787905    7893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:44:17.789503    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:18.259395    7893 kapi.go:107] duration metric: took 1m12.015243668s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:44:18.271995    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:18.768481    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:19.268204    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:19.775810    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:20.261368    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:20.776282    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:21.265817    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:21.773042    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:22.290624    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:22.764713    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:23.327484    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:23.761361    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:24.281861    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:24.767512    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:25.270470    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:44:25.772363    7893 kapi.go:107] duration metric: took 1m18.017476264s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:44:33.625303    7893 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:44:33.625358    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:34.120778    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:34.623889    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:35.120469    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:35.620373    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:36.120700    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:36.619700    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:37.119859    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:37.620126    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:38.127019    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:38.619593    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:39.119972    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:39.619473    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:40.120659    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:40.620121    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:41.120272    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:41.619669    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:42.121163    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:42.619583    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:43.121166    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:43.620289    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:44.122029    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:44.620735    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:45.119767    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:45.619976    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:46.120130    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:46.620459    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:47.119172    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:47.619144    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:48.120691    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:48.619092    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:49.120296    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:49.619681    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:50.120105    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:50.619202    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:51.121121    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:51.620263    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:52.120434    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:52.619448    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:53.119503    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:53.619909    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:54.120375    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:54.619612    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:55.120501    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:55.619707    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:56.120703    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:56.620055    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:57.120236    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:57.619169    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:58.120814    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:58.620256    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:59.120386    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:59.619482    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:00.120322    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:00.620755    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:01.126205    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:01.649069    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:02.129713    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:02.626480    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:03.120009    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:03.620325    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:04.120290    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:04.620547    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:05.121150    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:05.628638    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:06.119805    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:06.619884    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:07.119894    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:07.624411    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:08.120767    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:08.619374    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:09.119815    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:09.621219    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:10.120863    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:10.619499    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:11.120175    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:11.620674    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:12.124699    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:12.619803    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:13.120185    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:13.620045    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:14.122435    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:14.621945    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:15.133987    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:15.628419    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:16.120503    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:16.620819    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:17.119482    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:17.622882    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:18.120241    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:18.620479    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:19.120487    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:19.620224    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:20.120585    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:20.619457    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:21.120782    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:21.619735    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:22.127100    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:22.620297    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:23.119888    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:23.619302    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:24.119837    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:24.620094    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:25.120944    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:25.621477    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:26.121145    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:26.620808    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:27.119828    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:27.622821    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:28.120082    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:28.620661    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:29.119590    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:29.619978    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:30.119482    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:30.620668    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:31.120270    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:31.621024    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:32.121757    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:32.620018    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:33.131732    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:33.619471    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:34.120041    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:34.620917    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:35.119873    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:35.620443    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:36.119539    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:36.619992    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:37.119311    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:37.620372    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:38.126584    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:38.622434    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:39.123808    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:39.621021    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:40.121182    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:40.621230    7893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:45:41.131655    7893 kapi.go:107] duration metric: took 2m29.516544381s to wait for kubernetes.io/minikube-addons=gcp-auth ...
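
	The repeated "waiting for pod ... current state: Pending" lines above come from a simple label-selector poll: list the pods matching the selector, check their phase, wait roughly 500ms, and repeat until everything is Running or the timeout expires, then report the elapsed time as a "duration metric". The sketch below is not minikube's kapi.go, only a minimal illustration of that pattern under stated assumptions: it uses client-go, and the helper name waitForPodsByLabel plus the kubeconfig handling are hypothetical.

	// Minimal sketch (not minikube's kapi.go) of the poll-until-Running pattern
	// reflected in the "waiting for pod ..." lines above. Assumes k8s.io/client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsByLabel (hypothetical name) polls every 500ms until all pods matching
	// selector in ns report phase Running, or the timeout expires.
	func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before pods exist
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
		}
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodsByLabel(context.Background(), client, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			fmt.Println("wait failed:", err)
		}
	}

	Each tick issues one List per selector, which is why the log interleaves one line per label selector roughly every half second.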
	I0919 18:45:41.135317    7893 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-189999 cluster.
	I0919 18:45:41.138379    7893 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:45:41.142197    7893 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:45:41.145488    7893 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, volcano, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0919 18:45:41.148634    7893 addons.go:510] duration metric: took 3m4.409760439s for enable addons: enabled=[cloud-spanner nvidia-device-plugin volcano ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0919 18:45:41.148744    7893 start.go:246] waiting for cluster config update ...
	I0919 18:45:41.148814    7893 start.go:255] writing updated cluster config ...
	I0919 18:45:41.149453    7893 ssh_runner.go:195] Run: rm -f paused
	I0919 18:45:41.640574    7893 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:45:41.644291    7893 out.go:177] * Done! kubectl is now configured to use "addons-189999" cluster and "default" namespace by default
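
	The gcp-auth notes a few lines up say the addon mounts GCP credentials into every newly created pod, and that a pod can opt out by carrying a label with the `gcp-auth-skip-secret` key. The client-go sketch below is a minimal, hypothetical illustration of that opt-out: only the label key comes from the log, while the label value, pod name, namespace, and image are illustrative assumptions.

	// Minimal sketch of the opt-out described in the gcp-auth messages above.
	// Only the label key "gcp-auth-skip-secret" is taken from the log; everything
	// else here (value, names, image) is illustrative. Assumes k8s.io/client-go.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // opt out of credential mounting
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}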
	
	
	==> Docker <==
	Sep 19 18:55:16 addons-189999 dockerd[1160]: time="2024-09-19T18:55:16.384461751Z" level=info msg="ignoring event" container=31da15f51bb91f90f338a51c1204e37cecb60dd87ecd09b7c9d5a3c9e284c1d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:16 addons-189999 dockerd[1160]: time="2024-09-19T18:55:16.439118102Z" level=info msg="ignoring event" container=0034cc86dd648fb170852861e24b1a17ebb399fe8c4ba9824f484b9750e9d7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:16 addons-189999 dockerd[1160]: time="2024-09-19T18:55:16.857034186Z" level=info msg="ignoring event" container=b0e35dab2a330cbbe3db4fd3b5b7a154567325fd650f3c4384766b789fa9a896 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:16 addons-189999 dockerd[1160]: time="2024-09-19T18:55:16.858798695Z" level=info msg="ignoring event" container=4de11eeda949f3ceb9ed793152927a75bdce3aa603db1def1ee8c7f942ba20f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:23 addons-189999 dockerd[1160]: time="2024-09-19T18:55:23.100121153Z" level=info msg="ignoring event" container=591ede95985269df8313ad57072a4994100721f1a9e3b2c0f2cdd3d9e3810fe4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:23 addons-189999 dockerd[1160]: time="2024-09-19T18:55:23.135116147Z" level=info msg="ignoring event" container=5baab25299417086fbf1ce5c49be9edde49c623cff07e39a7b3953835a63b285 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:23 addons-189999 dockerd[1160]: time="2024-09-19T18:55:23.400791650Z" level=info msg="ignoring event" container=df98aec26d20c7d5a0dd3939743b31663d8e9a4cf81ba9d6d48db0d20cda9462 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:23 addons-189999 dockerd[1160]: time="2024-09-19T18:55:23.491774590Z" level=info msg="ignoring event" container=cbfd67a37137e5c2d63e10a0a5ea363a4a77d816bdb01876786987978aa99d51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:25 addons-189999 dockerd[1160]: time="2024-09-19T18:55:25.934327384Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:55:25 addons-189999 dockerd[1160]: time="2024-09-19T18:55:25.937309192Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:55:30 addons-189999 cri-dockerd[1417]: time="2024-09-19T18:55:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/268715422ba814222e0b3ff436c0bb3669c23329db45081a9043394862374ca8/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 19 18:55:32 addons-189999 cri-dockerd[1417]: time="2024-09-19T18:55:32Z" level=info msg="Stop pulling image docker.io/alpine/helm:2.16.3: Status: Downloaded newer image for alpine/helm:2.16.3"
	Sep 19 18:55:32 addons-189999 dockerd[1160]: time="2024-09-19T18:55:32.768401913Z" level=info msg="ignoring event" container=762cd0238a31e3e47d5793b035031cadf7c0df364ac27a9fd1610a1f4d0499ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:32 addons-189999 dockerd[1160]: time="2024-09-19T18:55:32.798483924Z" level=warning msg="failed to close stdin: NotFound: task 762cd0238a31e3e47d5793b035031cadf7c0df364ac27a9fd1610a1f4d0499ec not found: not found"
	Sep 19 18:55:34 addons-189999 dockerd[1160]: time="2024-09-19T18:55:34.410430702Z" level=info msg="ignoring event" container=268715422ba814222e0b3ff436c0bb3669c23329db45081a9043394862374ca8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:35 addons-189999 dockerd[1160]: time="2024-09-19T18:55:35.259205933Z" level=info msg="ignoring event" container=c72dfadf2f5fda9cce363b516f4268e0c4882c6f48ec7f2ea776137814934881 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:35 addons-189999 dockerd[1160]: time="2024-09-19T18:55:35.468290162Z" level=info msg="ignoring event" container=ecbf4efce2761f545c3dc68f9a259f07b4427e95504ddd64a8f278ba48787b64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:43 addons-189999 dockerd[1160]: time="2024-09-19T18:55:43.249926742Z" level=info msg="ignoring event" container=94cff670fbe03b6c14425d305489db2c748016c6bb966303be6ada357995928b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:43 addons-189999 dockerd[1160]: time="2024-09-19T18:55:43.404544963Z" level=info msg="ignoring event" container=c334e5da8abfd03018672b2d37a30a208d55c6a4b19257fc94a4a8286793cac0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:45 addons-189999 dockerd[1160]: time="2024-09-19T18:55:45.866830642Z" level=info msg="ignoring event" container=78e64e58804d9ce89ed1f45b692fbce48246e981135d3895169f94fef2a02871 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:46 addons-189999 dockerd[1160]: time="2024-09-19T18:55:46.972500099Z" level=info msg="ignoring event" container=2ed8e66ba9c9800648f6e017193f150b9a6f86ebc65837ab3d8860a88e37448b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:47 addons-189999 dockerd[1160]: time="2024-09-19T18:55:47.203021341Z" level=info msg="ignoring event" container=561753fe013e1ed31a4253489d4bc9a97119e331910af73048d196f0fe04ad3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:47 addons-189999 dockerd[1160]: time="2024-09-19T18:55:47.392142368Z" level=info msg="ignoring event" container=6549aac80459c0fb1dfb9d495440e06e5ef4066d88b5413737b4f537e3d52571 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:47 addons-189999 dockerd[1160]: time="2024-09-19T18:55:47.632014669Z" level=info msg="ignoring event" container=384af66adbbc4ecbb984ff92e4d7ed87668530f78d391350f97f647c4d2b01ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:55:48 addons-189999 dockerd[1160]: time="2024-09-19T18:55:48.700570989Z" level=info msg="ignoring event" container=bcc6d8064d93f8c4dfa2129e47f3fe5e8b352b220f7b135b00a680b76ddcc8f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	762cd0238a31e       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          17 seconds ago      Exited              helm-test                  0                   268715422ba81       helm-test
	755ca1e6f51a3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   b26713ffc2cae       gcp-auth-89d5ffd79-nk9tl
	86f545a21ffd5       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   40076a3b7cc1c       ingress-nginx-controller-bc57996ff-dj4x9
	f8b0b98be74fc       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                      1                   79c1ae7f9d9bc       ingress-nginx-admission-patch-jp4d8
	2ff90e69d07ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   1f127673b1596       ingress-nginx-admission-create-9t7mz
	6e858df5c2151       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   2ca1178f3a083       yakd-dashboard-67d98fc6b-fwl6s
	27538b9c7483d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   30e3d0f209eba       local-path-provisioner-86d989889c-2h4mm
	4a5670688e87e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   d9eca8c2193f5       kube-ingress-dns-minikube
	eecadf2209fcb       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   4c78d50f1bbb0       cloud-spanner-emulator-769b77f747-rmzgb
	b1ef83395f2d3       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   ea730c48f3ce8       nvidia-device-plugin-daemonset-r9fh5
	37d03a6bd7fe1       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner        0                   be50e6871e0ca       storage-provisioner
	a81bfeb8fa3c8       c69fa2e9cbf5f                                                                                                                13 minutes ago      Running             coredns                    0                   8ea2bdcb4198b       coredns-7c65d6cfc9-6w8rx
	8658d593948a5       60c005f310ff3                                                                                                                13 minutes ago      Running             kube-proxy                 0                   0ac616d7d1a3e       kube-proxy-zhhkx
	8e9ee95caf4c9       175ffd71cce3d                                                                                                                13 minutes ago      Running             kube-controller-manager    0                   91a51861a51ca       kube-controller-manager-addons-189999
	df2415688c3e4       9aa1fad941575                                                                                                                13 minutes ago      Running             kube-scheduler             0                   49d8ac4b60a83       kube-scheduler-addons-189999
	b43b0b5891fae       2e96e5913fc06                                                                                                                13 minutes ago      Running             etcd                       0                   2bd1105820f63       etcd-addons-189999
	2bbdcf307b2d1       6bab7719df100                                                                                                                13 minutes ago      Running             kube-apiserver             0                   f725741b3bf38       kube-apiserver-addons-189999
	
	
	==> controller_ingress [86f545a21ffd] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0919 18:44:16.904730       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0919 18:44:16.905124       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0919 18:44:16.912684       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0919 18:44:17.317666       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0919 18:44:17.377976       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0919 18:44:17.414175       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0919 18:44:17.463501       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9ed2aef8-6bf3-465b-9858-240e46c043e1", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0919 18:44:17.489404       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"47d7aef9-d66d-41e2-a719-130049babab5", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0919 18:44:17.490034       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"997d8e50-cbab-4e9c-914f-a6385b55e677", APIVersion:"v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0919 18:44:18.620794       7 nginx.go:317] "Starting NGINX process"
	I0919 18:44:18.622026       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0919 18:44:18.626500       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0919 18:44:18.637270       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0919 18:44:18.648109       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0919 18:44:18.649353       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-dj4x9"
	I0919 18:44:18.795533       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-dj4x9" node="addons-189999"
	I0919 18:44:18.816150       7 controller.go:213] "Backend successfully reloaded"
	I0919 18:44:18.816618       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0919 18:44:18.817256       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-dj4x9", UID:"63d5aaf0-cff3-471d-add3-98aa53e39099", APIVersion:"v1", ResourceVersion:"831", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [a81bfeb8fa3c] <==
	[INFO] 10.244.0.8:34148 - 18244 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107423s
	[INFO] 10.244.0.8:37556 - 59636 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098213s
	[INFO] 10.244.0.8:37556 - 13560 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105647s
	[INFO] 10.244.0.8:54537 - 34181 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000089013s
	[INFO] 10.244.0.8:54537 - 26496 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000136261s
	[INFO] 10.244.0.8:51997 - 46430 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000088836s
	[INFO] 10.244.0.8:51997 - 13147 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000109899s
	[INFO] 10.244.0.8:52956 - 22996 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000149059s
	[INFO] 10.244.0.8:52956 - 53719 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000171246s
	[INFO] 10.244.0.8:53926 - 22663 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196097s
	[INFO] 10.244.0.8:53926 - 49029 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000336423s
	[INFO] 10.244.0.26:50796 - 7716 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000443221s
	[INFO] 10.244.0.26:58407 - 30287 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164293s
	[INFO] 10.244.0.26:46697 - 30662 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186557s
	[INFO] 10.244.0.26:36479 - 4867 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000344826s
	[INFO] 10.244.0.26:42686 - 38828 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129518s
	[INFO] 10.244.0.26:41435 - 37752 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122349s
	[INFO] 10.244.0.26:34381 - 39496 "A IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.004368535s
	[INFO] 10.244.0.26:53306 - 26153 "AAAA IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.003849188s
	[INFO] 10.244.0.26:50719 - 30801 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003186675s
	[INFO] 10.244.0.26:38930 - 23676 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.00524344s
	[INFO] 10.244.0.26:56940 - 26008 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00374721s
	[INFO] 10.244.0.26:59153 - 16583 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003160184s
	[INFO] 10.244.0.26:54629 - 54610 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002522899s
	[INFO] 10.244.0.26:60628 - 10016 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.007408253s
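
	The run of NXDOMAIN answers followed by a final NOERROR above is ordinary resolver search-path expansion: the cri-dockerd entry in the Docker section rewrites the pod's resolv.conf with the cluster search list and "options ndots:5", so a name with fewer than five dots is tried against every search suffix before being queried as an absolute name. The snippet below is a minimal sketch of what triggers that sequence, assuming it runs inside a pod with such a resolv.conf.

	// Minimal sketch of the lookup behind the NXDOMAIN/NOERROR sequence above.
	// Only meaningful when run inside a pod whose resolv.conf carries the cluster
	// search list and "options ndots:5" shown in the cri-dockerd line earlier.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// "registry.kube-system.svc.cluster.local" has 4 dots, below ndots:5, so the
		// resolver appends each search domain first (the NXDOMAIN answers in the
		// coredns log) and only then queries the absolute name (the final NOERROR).
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("registry service resolves to:", addrs)
	}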
	
	
	==> describe nodes <==
	Name:               addons-189999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-189999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-189999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_42_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-189999
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-189999
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:55:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:42:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:42:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:42:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:42:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-189999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141772Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141772Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f2a6bae12eb4806b30efa4c3b8186eb
	  System UUID:                5f1290ea-89cd-4bc4-822d-85a542b27bf1
	  Boot ID:                    2e3a79ab-9a19-4953-9f20-4adc67c2013f
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  default                     cloud-spanner-emulator-769b77f747-rmzgb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gcp-auth                    gcp-auth-89d5ffd79-nk9tl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-dj4x9    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-6w8rx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-addons-189999                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-189999                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-189999       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-zhhkx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-189999                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 nvidia-device-plugin-daemonset-r9fh5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-2h4mm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fwl6s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-189999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-189999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node addons-189999 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-189999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-189999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-189999 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node addons-189999 event: Registered Node addons-189999 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 82 3b db ed 7d 45 08 06
	[  +2.266500] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 c5 5a 01 c2 37 08 06
	[  +0.031803] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de be 5d 7e 44 9a 08 06
	[Sep19 18:44] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff 9a 72 45 6d 27 cc 08 06
	[ +15.328210] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 f1 d4 5b 0c 94 08 06
	[  +3.409374] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a 51 80 ca 37 11 08 06
	[  +0.502353] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 96 0c 9e 0f c0 98 08 06
	[  +0.366981] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ea fd 4a 45 0c 08 06
	[Sep19 18:45] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 4e 7d cf 32 71 08 06
	[  +0.306108] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e f8 65 d9 28 74 08 06
	[ +24.898049] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 5a 9f 2f e3 4a 08 06
	[  +0.001228] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 03 07 93 2f ad 08 06
	[Sep19 18:55] IPv4: martian source 10.244.0.1 from 10.244.0.33, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 7e 12 ef c3 1e 19 08 06
	
	
	==> etcd [b43b0b5891fa] <==
	{"level":"info","ts":"2024-09-19T18:44:12.012090Z","caller":"traceutil/trace.go:171","msg":"trace[1173204301] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1277; }","duration":"264.532918ms","start":"2024-09-19T18:44:11.747547Z","end":"2024-09-19T18:44:12.012080Z","steps":["trace[1173204301] 'range keys from in-memory index tree'  (duration: 264.289706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:12.012357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.781747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:44:12.012508Z","caller":"traceutil/trace.go:171","msg":"trace[1518400510] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1277; }","duration":"247.91925ms","start":"2024-09-19T18:44:11.764553Z","end":"2024-09-19T18:44:12.012472Z","steps":["trace[1518400510] 'range keys from in-memory index tree'  (duration: 247.701753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:12.012793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.315891ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:44:12.012898Z","caller":"traceutil/trace.go:171","msg":"trace[2084757870] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1277; }","duration":"197.422092ms","start":"2024-09-19T18:44:11.815466Z","end":"2024-09-19T18:44:12.012888Z","steps":["trace[2084757870] 'range keys from in-memory index tree'  (duration: 197.246529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:16.254690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"360.861325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:44:16.254767Z","caller":"traceutil/trace.go:171","msg":"trace[1702632630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1300; }","duration":"360.958307ms","start":"2024-09-19T18:44:15.893792Z","end":"2024-09-19T18:44:16.254751Z","steps":["trace[1702632630] 'range keys from in-memory index tree'  (duration: 360.753835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:16.254804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T18:44:15.893738Z","time spent":"361.056715ms","remote":"127.0.0.1:37882","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-19T18:44:16.255031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"360.948798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-19T18:44:16.255105Z","caller":"traceutil/trace.go:171","msg":"trace[1571851353] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1300; }","duration":"361.026915ms","start":"2024-09-19T18:44:15.894065Z","end":"2024-09-19T18:44:16.255092Z","steps":["trace[1571851353] 'count revisions from in-memory index tree'  (duration: 360.846982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:16.255143Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T18:44:15.894034Z","time spent":"361.097366ms","remote":"127.0.0.1:38218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":31,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-09-19T18:44:16.259296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.419347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-19T18:44:16.259350Z","caller":"traceutil/trace.go:171","msg":"trace[1508854695] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:1300; }","duration":"332.782693ms","start":"2024-09-19T18:44:15.926557Z","end":"2024-09-19T18:44:16.259339Z","steps":["trace[1508854695] 'count revisions from in-memory index tree'  (duration: 332.338871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:16.259387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T18:44:15.926501Z","time spent":"332.873842ms","remote":"127.0.0.1:38048","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":29,"response size":31,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true "}
	{"level":"warn","ts":"2024-09-19T18:44:16.259664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.483476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:44:16.259698Z","caller":"traceutil/trace.go:171","msg":"trace[2011501208] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotcontents0; response_count:0; response_revision:1300; }","duration":"221.527673ms","start":"2024-09-19T18:44:16.038160Z","end":"2024-09-19T18:44:16.259688Z","steps":["trace[2011501208] 'count revisions from in-memory index tree'  (duration: 221.417592ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:44:16.260383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.110636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:44:16.260438Z","caller":"traceutil/trace.go:171","msg":"trace[1588858913] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1300; }","duration":"137.168127ms","start":"2024-09-19T18:44:16.123255Z","end":"2024-09-19T18:44:16.260424Z","steps":["trace[1588858913] 'range keys from in-memory index tree'  (duration: 137.049247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:46:09.728561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.330508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:46:09.728668Z","caller":"traceutil/trace.go:171","msg":"trace[2143349949] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1625; }","duration":"159.501531ms","start":"2024-09-19T18:46:09.569149Z","end":"2024-09-19T18:46:09.728651Z","steps":["trace[2143349949] 'range keys from in-memory index tree'  (duration: 159.234257ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:52:27.104722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1898}
	{"level":"warn","ts":"2024-09-19T18:52:27.438120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.960586ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005984583225 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2324 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:52:27.438613Z","caller":"traceutil/trace.go:171","msg":"trace[1254059594] transaction","detail":"{read_only:false; response_revision:2328; number_of_response:1; }","duration":"127.809089ms","start":"2024-09-19T18:52:27.310778Z","end":"2024-09-19T18:52:27.438587Z","steps":["trace[1254059594] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1095; } (duration: 124.833242ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:52:27.456891Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1898,"took":"350.842581ms","hash":4206004966,"current-db-size-bytes":9105408,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5050368,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-19T18:52:27.456959Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4206004966,"revision":1898,"compact-revision":-1}
	
	
	==> gcp-auth [755ca1e6f51a] <==
	2024/09/19 18:45:40 GCP Auth Webhook started!
	2024/09/19 18:45:59 Ready to marshal response ...
	2024/09/19 18:45:59 Ready to write response ...
	2024/09/19 18:46:00 Ready to marshal response ...
	2024/09/19 18:46:00 Ready to write response ...
	2024/09/19 18:46:28 Ready to marshal response ...
	2024/09/19 18:46:28 Ready to write response ...
	2024/09/19 18:46:29 Ready to marshal response ...
	2024/09/19 18:46:29 Ready to write response ...
	2024/09/19 18:46:29 Ready to marshal response ...
	2024/09/19 18:46:29 Ready to write response ...
	2024/09/19 18:54:45 Ready to marshal response ...
	2024/09/19 18:54:45 Ready to write response ...
	2024/09/19 18:54:51 Ready to marshal response ...
	2024/09/19 18:54:51 Ready to write response ...
	2024/09/19 18:55:05 Ready to marshal response ...
	2024/09/19 18:55:05 Ready to write response ...
	2024/09/19 18:55:29 Ready to marshal response ...
	2024/09/19 18:55:29 Ready to write response ...
	
	
	==> kernel <==
	 18:55:49 up 37 min,  0 users,  load average: 0.33, 1.06, 1.22
	Linux addons-189999 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2bbdcf307b2d] <==
	I0919 18:46:21.005101       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:46:21.055341       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0919 18:46:21.294108       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0919 18:46:21.313765       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0919 18:46:21.411413       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0919 18:46:21.766868       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0919 18:46:22.056069       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0919 18:46:22.451462       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0919 18:54:59.370685       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 18:55:22.713811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:55:22.714257       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:55:22.755465       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:55:22.755564       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:55:22.780572       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:55:22.780972       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:55:22.797276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:55:22.797339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:55:23.146134       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:55:23.146289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:55:23.797637       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:55:24.145785       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 18:55:24.183996       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0919 18:55:32.744107       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.33:41538: read: connection reset by peer
	I0919 18:55:48.476705       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:55:49.567112       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [8e9ee95caf4c] <==
	E0919 18:55:31.624983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:32.199170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:32.199231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:32.537919       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:32.538208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:35.145255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="15.261µs"
	I0919 18:55:39.992256       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0919 18:55:39.992326       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:55:40.018813       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0919 18:55:40.019007       1 shared_informer.go:320] Caches are synced for garbage collector
	W0919 18:55:41.011656       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:41.011719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:41.729055       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:41.729497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:42.094996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.172µs"
	W0919 18:55:43.748567       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:43.748662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:44.817735       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:44.817788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:44.914562       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:44.914623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:46.869047       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.781µs"
	W0919 18:55:46.947557       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:46.947621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0919 18:55:49.570292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8658d593948a] <==
	I0919 18:42:44.742482       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:42:47.735054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:42:47.735173       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:42:48.112697       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:42:48.113238       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:42:48.117795       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:42:48.118563       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:42:48.144925       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:42:48.386728       1 config.go:199] "Starting service config controller"
	I0919 18:42:48.402993       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:42:48.406049       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:42:48.407000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:42:48.409858       1 config.go:328] "Starting node config controller"
	I0919 18:42:48.416979       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:42:48.620237       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:42:48.620281       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:42:48.620323       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [df2415688c3e] <==
	W0919 18:42:29.147304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:42:29.147886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:29.147320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:42:29.148404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:29.973784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:42:29.973860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.032966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:42:30.033408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.075152       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:42:30.075604       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:42:30.086189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 18:42:30.086797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.168727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:42:30.169097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.321191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:42:30.321621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.341197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:42:30.341598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.374657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:42:30.374710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.430003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:42:30.430056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:42:30.454745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:42:30.454801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0919 18:42:32.105519       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.826444    2181 scope.go:117] "RemoveContainer" containerID="158ab8c2075f91e6c6ed1efb78145a3fdf32c6269090fe08fbeff219b2e2c290"
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.863517    2181 scope.go:117] "RemoveContainer" containerID="561753fe013e1ed31a4253489d4bc9a97119e331910af73048d196f0fe04ad3e"
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916374    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-host\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916436    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-bpffs\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916478    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-modules\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916514    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-debugfs\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916551    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-run\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916608    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6cr\" (UniqueName: \"kubernetes.io/projected/d0ef91b5-18ff-44b6-822d-ccea51ffe650-kube-api-access-tg6cr\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916651    2181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-cgroup\") pod \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\" (UID: \"d0ef91b5-18ff-44b6-822d-ccea51ffe650\") "
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.917122    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-modules" (OuterVolumeSpecName: "modules") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.917173    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-host" (OuterVolumeSpecName: "host") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.917199    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-bpffs" (OuterVolumeSpecName: "bpffs") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.917246    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-run" (OuterVolumeSpecName: "run") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.917273    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-debugfs" (OuterVolumeSpecName: "debugfs") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.916836    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-cgroup" (OuterVolumeSpecName: "cgroup") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:55:48 addons-189999 kubelet[2181]: I0919 18:55:48.922008    2181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ef91b5-18ff-44b6-822d-ccea51ffe650-kube-api-access-tg6cr" (OuterVolumeSpecName: "kube-api-access-tg6cr") pod "d0ef91b5-18ff-44b6-822d-ccea51ffe650" (UID: "d0ef91b5-18ff-44b6-822d-ccea51ffe650"). InnerVolumeSpecName "kube-api-access-tg6cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017130    2181 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-run\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017192    2181 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-debugfs\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017213    2181 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tg6cr\" (UniqueName: \"kubernetes.io/projected/d0ef91b5-18ff-44b6-822d-ccea51ffe650-kube-api-access-tg6cr\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017232    2181 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-cgroup\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017248    2181 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-host\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017264    2181 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-bpffs\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.017280    2181 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/d0ef91b5-18ff-44b6-822d-ccea51ffe650-modules\") on node \"addons-189999\" DevicePath \"\""
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.929034    2181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2f5817-bdcc-4c04-b033-af85ade76356" path="/var/lib/kubelet/pods/0c2f5817-bdcc-4c04-b033-af85ade76356/volumes"
	Sep 19 18:55:49 addons-189999 kubelet[2181]: I0919 18:55:49.930340    2181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0ef91b5-18ff-44b6-822d-ccea51ffe650" path="/var/lib/kubelet/pods/d0ef91b5-18ff-44b6-822d-ccea51ffe650/volumes"
	
	
	==> storage-provisioner [37d03a6bd7fe] <==
	I0919 18:42:54.365626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:42:54.693809       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:42:54.693916       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:42:55.116755       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:42:55.117108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-189999_cfcd86c9-a79c-43e7-8696-607dc440b99f!
	I0919 18:42:55.152158       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18b23e10-7821-4668-a89e-19431b59a782", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-189999_cfcd86c9-a79c-43e7-8696-607dc440b99f became leader
	I0919 18:42:55.557146       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-189999_cfcd86c9-a79c-43e7-8696-607dc440b99f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-189999 -n addons-189999
helpers_test.go:261: (dbg) Run:  kubectl --context addons-189999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-9t7mz ingress-nginx-admission-patch-jp4d8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-189999 describe pod busybox ingress-nginx-admission-create-9t7mz ingress-nginx-admission-patch-jp4d8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-189999 describe pod busybox ingress-nginx-admission-create-9t7mz ingress-nginx-admission-patch-jp4d8: exit status 1 (122.812856ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-189999/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:46:29 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w2gjr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w2gjr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                     From               Message
	  ----     ------          ----                    ----               -------
	  Normal   Scheduled       9m22s                   default-scheduler  Successfully assigned default/busybox to addons-189999
	  Normal   SandboxChanged  9m21s                   kubelet            Pod sandbox changed, it will be killed and re-created.
	  Normal   Pulling         7m59s (x4 over 9m22s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m59s (x4 over 9m22s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m59s (x4 over 9m22s)   kubelet            Error: ErrImagePull
	  Warning  Failed          7m46s (x6 over 9m20s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff         4m15s (x22 over 9m20s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9t7mz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jp4d8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-189999 describe pod busybox ingress-nginx-admission-create-9t7mz ingress-nginx-admission-patch-jp4d8: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.73s)
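For reference, a minimal manual check that mirrors the ImagePullBackOff reported in the pod events above (a sketch, not part of the test run, assuming the addons-189999 profile is still up) is to pull the same image directly on the node and list the pod's events:

	out/minikube-linux-amd64 -p addons-189999 ssh "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	kubectl --context addons-189999 get events --field-selector involvedObject.name=busybox

If the direct pull fails with the same "unauthorized: authentication failed" response from gcr.io, the failure is environmental (registry access from the runner) rather than a problem in the test logic.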

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0919 19:00:43.358699    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:44.641116    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0919 19:02:04.011121    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
2024/09/19 19:02:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0919 19:03:25.934280    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Non-zero exit: kubectl --context functional-548331 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: signal: killed (2.498009ms)
functional_test_tunnel_test.go:245: nginx-svc svc.status.loadBalancer.ingress never got an IP: signal: killed
functional_test_tunnel_test.go:246: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc
functional_test_tunnel_test.go:250: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.99.133.94   <pending>     80:30199/TCP   3m9s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.10s)
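For reference, the poll above reads .status.loadBalancer.ingress[0].ip, which minikube only populates for LoadBalancer services while a tunnel is active. A minimal manual reproduction (a sketch, assuming the functional-548331 profile is still up and that the tunnel subcommand behaves as in upstream minikube) would be:

	out/minikube-linux-amd64 -p functional-548331 tunnel
	# in a second shell, EXTERNAL-IP should move from <pending> to an address
	kubectl --context functional-548331 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

If the IP still never appears while a tunnel is running, the problem lies in route/IP assignment on the runner rather than in the nginx-svc service definition.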

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdany-port1995812660/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772481315491671" to /tmp/TestFunctionalparallelMountCmdany-port1995812660/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772481315491671" to /tmp/TestFunctionalparallelMountCmdany-port1995812660/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772481315491671" to /tmp/TestFunctionalparallelMountCmdany-port1995812660/001/test-1726772481315491671
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (608.267482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:21.969982    7874 retry.go:31] will retry after 250.471034ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.015984ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:22.612725    7874 retry.go:31] will retry after 554.872933ms: exit status 1
E0919 19:01:23.049095    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (430.934161ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:23.599052    7874 retry.go:31] will retry after 1.287613644s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.859656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:25.288985    7874 retry.go:31] will retry after 1.422199771s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (404.554148ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:27.117265    7874 retry.go:31] will retry after 3.778740324s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.564157ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:125: /mount-9p did not appear within 9.980656926s: exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (400.31545ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-548331 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "sudo umount -f /mount-9p": exit status 1 (391.509247ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-linux-amd64 -p functional-548331 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdany-port1995812660/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdany-port1995812660/001:/mount-9p --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdany-port1995812660/001:/mount-9p --alsologtostderr -v=1] stderr:
I0919 19:01:21.435268   43652 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:21.435641   43652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:21.435671   43652 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:21.435690   43652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:21.436173   43652 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:01:21.436880   43652 mustload.go:65] Loading cluster: functional-548331
I0919 19:01:21.437647   43652 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:01:21.438669   43652 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:01:21.496021   43652 host.go:66] Checking if "functional-548331" exists ...
I0919 19:01:21.496558   43652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 19:01:21.699060   43652 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:01:21.673224846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0919 19:01:21.699314   43652 cli_runner.go:164] Run: docker network inspect functional-548331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:01:21.765325   43652 out.go:201] 
W0919 19:01:21.767251   43652 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0919 19:01:21.769285   43652 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (10.94s)
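
All three MountCmd failures in this run share the root cause recorded in the mount process stderr above: the Cloud Shell host kernel does not support the 9p filesystem, so the 9p mount can never appear and every findmnt probe exits non-zero. As a minimal sketch (not minikube's actual check), one way to confirm 9p availability on a Linux host is to look for it in /proc/filesystems; note that a 9p module that has not been loaded yet will not be listed there.

// probe9p.go - minimal sketch (not minikube's actual check): report whether the
// running kernel lists the 9p filesystem in /proc/filesystems. A 9p module that
// has not been loaded yet would not show up here, so treat "false" as a hint,
// not proof.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/filesystems")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	supported := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like "nodev\tproc" or "\text4"; the filesystem name is the last field.
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[len(fields)-1] == "9p" {
			supported = true
			break
		}
	}
	fmt.Println("9p filesystem available:", supported)
}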

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (15.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdspecific-port1055815161/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (600.153591ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:32.800141    7874 retry.go:31] will retry after 600.83066ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.794281ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:33.796022    7874 retry.go:31] will retry after 616.702374ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (419.544525ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:34.833325    7874 retry.go:31] will retry after 1.571125834s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.231077ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:36.798253    7874 retry.go:31] will retry after 992.459999ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.968935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:38.212119    7874 retry.go:31] will retry after 3.497386161s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.445366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:42.099541    7874 retry.go:31] will retry after 4.183478868s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.268152ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 14.47556988s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (406.681851ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-548331 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "sudo umount -f /mount-9p": exit status 1 (402.672687ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-548331 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdspecific-port1055815161/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdspecific-port1055815161/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdspecific-port1055815161/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0919 19:01:32.328505   44235 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:32.328877   44235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:32.328895   44235 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:32.328903   44235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:32.329266   44235 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:01:32.329686   44235 mustload.go:65] Loading cluster: functional-548331
I0919 19:01:32.330279   44235 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:01:32.331111   44235 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:01:32.382473   44235 host.go:66] Checking if "functional-548331" exists ...
I0919 19:01:32.383048   44235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 19:01:32.574941   44235 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:01:32.543491042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0919 19:01:32.575256   44235 cli_runner.go:164] Run: docker network inspect functional-548331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:01:32.610877   44235 out.go:201] 
W0919 19:01:32.612894   44235 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0919 19:01:32.614644   44235 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (15.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (12.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (1.254112986s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:48.854836    7874 retry.go:31] will retry after 603.612807ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (401.577356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:49.861343    7874 retry.go:31] will retry after 893.575139ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (479.089949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:51.234348    7874 retry.go:31] will retry after 641.780571ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (404.428825ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:52.312627    7874 retry.go:31] will retry after 1.827007323s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (393.37307ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:54.533540    7874 retry.go:31] will retry after 1.576848799s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (415.549768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:01:56.526654    7874 retry.go:31] will retry after 2.654929325s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "findmnt -T" /mount1: exit status 1 (398.571263ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:342: mount was not ready in time: exit status 1
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount1 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount1 --alsologtostderr -v=1] stderr:
I0919 19:01:47.897652   44906 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:47.898801   44906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.898833   44906 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:47.898888   44906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.899361   44906 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:01:47.905780   44906 mustload.go:65] Loading cluster: functional-548331
I0919 19:01:47.906746   44906 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:01:47.908024   44906 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:01:48.001713   44906 host.go:66] Checking if "functional-548331" exists ...
I0919 19:01:48.002779   44906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 19:01:48.564443   44906 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:01:48.495724213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0919 19:01:48.564707   44906 cli_runner.go:164] Run: docker network inspect functional-548331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:01:48.708577   44906 out.go:201] 
W0919 19:01:48.710441   44906 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0919 19:01:48.712314   44906 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount2 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount2 --alsologtostderr -v=1] stderr:
I0919 19:01:47.891425   44907 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:47.893362   44907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.893404   44907 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:47.893424   44907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.893953   44907 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:01:47.894544   44907 mustload.go:65] Loading cluster: functional-548331
I0919 19:01:47.895338   44907 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:01:47.898236   44907 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:01:48.029731   44907 host.go:66] Checking if "functional-548331" exists ...
I0919 19:01:48.030593   44907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 19:01:48.643128   44907 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:63 SystemTime:2024-09-19 19:01:48.548519344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0919 19:01:48.643425   44907 cli_runner.go:164] Run: docker network inspect functional-548331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:01:48.731998   44907 out.go:201] 
W0919 19:01:48.734441   44907 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0919 19:01:48.736334   44907 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount3 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-548331 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2175917128/001:/mount3 --alsologtostderr -v=1] stderr:
I0919 19:01:47.892994   44908 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:47.893359   44908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.893379   44908 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:47.893387   44908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:47.894054   44908 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:01:47.894532   44908 mustload.go:65] Loading cluster: functional-548331
I0919 19:01:47.903106   44908 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:01:47.904773   44908 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:01:47.997153   44908 host.go:66] Checking if "functional-548331" exists ...
I0919 19:01:47.997649   44908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0919 19:01:48.559340   44908 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:01:48.495724213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0919 19:01:48.559616   44908 cli_runner.go:164] Run: docker network inspect functional-548331 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:01:48.732158   44908 out.go:201] 
W0919 19:01:48.734294   44908 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0919 19:01:48.736438   44908 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (12.31s)
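
The repeated "retry.go:31] will retry after ..." lines throughout these mount tests reflect the harness polling the findmnt probe with growing, jittered delays until the mount shows up or the overall window expires. The sketch below illustrates that general shape only; the specific delays, jitter, and limits are assumptions, not the harness's real values.

// retrysketch.go - minimal sketch of the polling pattern behind the
// "will retry after ..." lines (an assumed shape, not the harness's code):
// keep re-running a probe with growing, jittered delays until it succeeds
// or an overall deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	base := 400 * time.Millisecond
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("did not appear within %v: %w", timeout, err)
		}
		// Jitter the delay, then roughly double the base for the next attempt.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base *= 2
	}
}

func main() {
	// Stand-in for `ssh "findmnt -T /mount-9p | grep 9p"`, which never succeeds here.
	err := retryUntil(5*time.Second, func() error { return errors.New("exit status 1") })
	fmt.Println(err)
}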

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (84.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0919 19:03:33.223953    7874 retry.go:31] will retry after 2.325410527s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:03:35.549956    7874 retry.go:31] will retry after 5.810749663s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:03:41.361247    7874 retry.go:31] will retry after 6.150915735s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:03:47.514018    7874 retry.go:31] will retry after 12.147595904s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:03:59.661923    7874 retry.go:31] will retry after 11.274600559s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:04:10.938029    7874 retry.go:31] will retry after 27.691883892s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:04:38.631997    7874 retry.go:31] will retry after 18.657718955s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-548331 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.99.133.94   <pending>     80:30199/TCP   4m33s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (84.21s)
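
This failure follows directly from the kubectl output above: nginx-svc is a LoadBalancer service whose EXTERNAL-IP is still <pending>, so the tunnel never published an address and the test ends up building its request URL from an empty host. The standalone sketch below (not the test's actual code) reproduces the same "no Host in request URL" error by issuing a GET against "http://" plus an empty external IP.

// nohost.go - minimal sketch (not the test's code): building a request URL from
// an empty LoadBalancer ingress IP yields "http://", which net/http rejects with
// the same "no Host in request URL" error seen in the retries above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	externalIP := "" // EXTERNAL-IP is still <pending>, so no ingress address is available
	resp, err := http.Get("http://" + externalIP)
	if err != nil {
		fmt.Println(err) // Get "http:": http: no Host in request URL
		return
	}
	defer resp.Body.Close()
}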

                                                
                                    

Test pass (97/108)

Order  Passed test  Duration (s)
3 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
4 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
5 TestAddons/Setup 262.65
7 TestAddons/serial/Volcano 46.94
9 TestAddons/serial/GCPAuth/Namespaces 0.26
12 TestAddons/parallel/Ingress 23.67
13 TestAddons/parallel/InspektorGadget 12.01
14 TestAddons/parallel/MetricsServer 6.98
15 TestAddons/parallel/HelmTiller 11.65
17 TestAddons/parallel/CSI 49.21
18 TestAddons/parallel/Headlamp 14.55
19 TestAddons/parallel/CloudSpanner 6.74
20 TestAddons/parallel/LocalPath 56.6
21 TestAddons/parallel/NvidiaDevicePlugin 5.61
22 TestAddons/parallel/Yakd 11.91
23 TestAddons/StoppedEnableDisable 11.62
26 TestFunctional/serial/CopySyncFile 0.06
27 TestFunctional/serial/StartWithProxy 74.93
28 TestFunctional/serial/AuditLog 0
29 TestFunctional/serial/SoftStart 37.68
30 TestFunctional/serial/KubeContext 0.09
31 TestFunctional/serial/KubectlGetPods 0.12
34 TestFunctional/serial/CacheCmd/cache/add_remote 2.89
35 TestFunctional/serial/CacheCmd/cache/add_local 1.43
36 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.11
37 TestFunctional/serial/CacheCmd/cache/list 0.09
38 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.48
39 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
40 TestFunctional/serial/CacheCmd/cache/delete 0.18
41 TestFunctional/serial/MinikubeKubectlCmd 1.11
42 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.34
43 TestFunctional/serial/ExtraConfig 53.33
44 TestFunctional/serial/ComponentHealth 0.12
45 TestFunctional/serial/LogsCmd 1.61
46 TestFunctional/serial/LogsFileCmd 1.65
47 TestFunctional/serial/InvalidService 5.32
49 TestFunctional/parallel/ConfigCmd 1.83
50 TestFunctional/parallel/DashboardCmd 16.49
51 TestFunctional/parallel/DryRun 0.67
52 TestFunctional/parallel/InternationalLanguage 0.36
53 TestFunctional/parallel/StatusCmd 1.64
57 TestFunctional/parallel/ServiceCmdConnect 12.96
58 TestFunctional/parallel/AddonsCmd 0.23
59 TestFunctional/parallel/PersistentVolumeClaim 30.62
61 TestFunctional/parallel/SSHCmd 3.47
62 TestFunctional/parallel/CpCmd 10.95
63 TestFunctional/parallel/MySQL 35.03
64 TestFunctional/parallel/FileSync 0.42
65 TestFunctional/parallel/CertSync 2.64
69 TestFunctional/parallel/NodeLabels 0.1
71 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
73 TestFunctional/parallel/License 1.52
75 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 4.47
76 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
78 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.31
80 TestFunctional/parallel/ServiceCmd/DeployApp 8.3
81 TestFunctional/parallel/ServiceCmd/List 0.67
82 TestFunctional/parallel/ServiceCmd/JSONOutput 0.71
83 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
84 TestFunctional/parallel/ServiceCmd/Format 0.62
85 TestFunctional/parallel/ServiceCmd/URL 0.57
86 TestFunctional/parallel/ProfileCmd/profile_not_create 0.73
87 TestFunctional/parallel/ProfileCmd/profile_list 0.63
88 TestFunctional/parallel/ProfileCmd/profile_json_output 0.74
92 TestFunctional/parallel/Version/short 0.1
93 TestFunctional/parallel/Version/components 1.69
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
98 TestFunctional/parallel/ImageCommands/ImageBuild 3.49
99 TestFunctional/parallel/ImageCommands/Setup 2.72
100 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.58
101 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
102 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.5
103 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
104 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
105 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
106 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
107 TestFunctional/parallel/DockerEnv/bash 1.52
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
116 TestFunctional/delete_echo-server_images 0.06
117 TestFunctional/delete_my-image_image 0.03
118 TestFunctional/delete_minikube_cached_images 0.03
123 TestStartStop/group/cloud-shell/serial/FirstStart 78.65
124 TestStartStop/group/cloud-shell/serial/DeployApp 9.44
125 TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive 1.32
126 TestStartStop/group/cloud-shell/serial/Stop 11.11
127 TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop 0.28
128 TestStartStop/group/cloud-shell/serial/SecondStart 272.5
129 TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop 6.01
130 TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop 5.13
131 TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages 0.32
132 TestStartStop/group/cloud-shell/serial/Pause 4.6
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-189999
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-189999: exit status 85 (161.550683ms)

                                                
                                                
-- stdout --
	* Profile "addons-189999" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-189999"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-189999
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-189999: exit status 85 (156.85491ms)

                                                
                                                
-- stdout --
	* Profile "addons-189999" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-189999"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
TestAddons/Setup (262.65s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-189999 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-189999 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m22.649396197s)
--- PASS: TestAddons/Setup (262.65s)

                                                
                                    
TestAddons/serial/Volcano (46.94s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 299.726841ms
addons_test.go:913: volcano-controller stabilized in 300.075613ms
addons_test.go:905: volcano-admission stabilized in 300.233321ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-mffb2" [cf791964-635a-4c29-9aeb-502e25b0e7c8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00537835s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6ck7h" [a9356ead-ac03-4d79-840a-76abd7b24d93] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004076397s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dlnlv" [979ce435-cbe2-4d05-b584-cf9f70074363] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005641135s
addons_test.go:932: (dbg) Run:  kubectl --context addons-189999 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-189999 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-189999 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c4abe536-fbe1-4fbb-bf2b-008ead09c29a] Pending
helpers_test.go:344: "test-job-nginx-0" [c4abe536-fbe1-4fbb-bf2b-008ead09c29a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c4abe536-fbe1-4fbb-bf2b-008ead09c29a] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.00482681s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable volcano --alsologtostderr -v=1: (11.076426054s)
--- PASS: TestAddons/serial/Volcano (46.94s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-189999 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-189999 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                    
TestAddons/parallel/Ingress (23.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-189999 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-189999 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-189999 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [865692fc-0747-492c-966e-d262fe6c67f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [865692fc-0747-492c-966e-d262fe6c67f6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005913017s
I0919 18:56:01.954566    7874 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-189999 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable ingress-dns --alsologtostderr -v=1: (2.062780393s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable ingress --alsologtostderr -v=1: (8.588058889s)
--- PASS: TestAddons/parallel/Ingress (23.67s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ncsj9" [d0ef91b5-18ff-44b6-822d-ccea51ffe650] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.036931646s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-189999
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-189999: (6.967079176s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 11.03303ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4p8l8" [f09b1fc6-b6de-47ef-834c-4d9b2f35aff0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005410085s
addons_test.go:417: (dbg) Run:  kubectl --context addons-189999 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.98s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.75296ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-lxzbk" [0c431a1d-0059-4b8e-897a-3feab83def78] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004835093s
addons_test.go:475: (dbg) Run:  kubectl --context addons-189999 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-189999 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.84849012s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.65s)
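
The Tiller check above reduces to launching a throwaway Helm 2 client pod and asking it for the client/server version. A sketch of the same probe, assuming the helm-tiller addon is enabled on addons-189999:

	# one-shot helm 2.16.3 client in kube-system; "version" succeeds only if it can reach tiller-deploy
	kubectl --context addons-189999 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
	# tidy the addon back up afterwards, as the test does
	out/minikube-linux-amd64 -p addons-189999 addons disable helm-tiller --alsologtostderr -v=1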

                                                
                                    
TestAddons/parallel/CSI (49.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 26.460515ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [742cc564-5db5-48b9-b255-bcc0fe1f10d7] Pending
helpers_test.go:344: "task-pv-pod" [742cc564-5db5-48b9-b255-bcc0fe1f10d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [742cc564-5db5-48b9-b255-bcc0fe1f10d7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004425804s
addons_test.go:590: (dbg) Run:  kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-189999 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-189999 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-189999 delete pod task-pv-pod: (1.262180298s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-189999 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b8a72773-1c08-4ed0-8898-1e33103b917a] Pending
helpers_test.go:344: "task-pv-pod-restore" [b8a72773-1c08-4ed0-8898-1e33103b917a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b8a72773-1c08-4ed0-8898-1e33103b917a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008963184s
addons_test.go:632: (dbg) Run:  kubectl --context addons-189999 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-189999 delete pod task-pv-pod-restore: (1.759742506s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-189999 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-189999 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.266102763s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable volumesnapshots --alsologtostderr -v=1: (1.544970991s)
--- PASS: TestAddons/parallel/CSI (49.21s)
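
Condensed, the CSI block above is a provision, snapshot, restore walkthrough against the csi-hostpath driver; the kubectl sequence below is the same flow with the waits elided, assuming the testdata/csi-hostpath-driver manifests from the minikube repo are available:

	kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pvc.yaml             # PVC "hpvc"
	kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod "task-pv-pod" mounts it
	kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-189999 delete pod task-pv-pod
	kubectl --context addons-189999 delete pvc hpvc
	kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new PVC "hpvc-restore" from the snapshot
	kubectl --context addons-189999 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore" reads the restored data
	kubectl --context addons-189999 get pvc hpvc-restore -o jsonpath={.status.phase}            # expect Bound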

                                                
                                    
TestAddons/parallel/Headlamp (14.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-189999 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-189999 --alsologtostderr -v=1: (1.252625849s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-sdxt5" [f8bd9ae9-19f8-49a1-a53d-7a21e2d665d0] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-sdxt5" [f8bd9ae9-19f8-49a1-a53d-7a21e2d665d0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-sdxt5" [f8bd9ae9-19f8-49a1-a53d-7a21e2d665d0] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.0046124s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (14.55s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-rmzgb" [cd6e4009-a03d-46a7-84d5-0af25a777be1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007195165s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-189999
--- PASS: TestAddons/parallel/CloudSpanner (6.74s)

                                                
                                    
TestAddons/parallel/LocalPath (56.6s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-189999 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-189999 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e684cf2f-928c-4c06-bc7b-2f4f49901054] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e684cf2f-928c-4c06-bc7b-2f4f49901054] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e684cf2f-928c-4c06-bc7b-2f4f49901054] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004810094s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-189999 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 ssh "cat /opt/local-path-provisioner/pvc-7cc8cbe4-e893-4189-9c9f-ee741c5ffab5_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-189999 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-189999 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.236346162s)
--- PASS: TestAddons/parallel/LocalPath (56.60s)
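
The LocalPath run exercises the local-path provisioner end to end: a PVC plus a busybox pod that writes file1, then a read of that file straight from the node's /opt/local-path-provisioner directory. A repro sketch using the same manifests (the volume directory name embeds this run's PVC UID and will differ on another cluster):

	kubectl --context addons-189999 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-189999 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-189999 get pvc test-pvc -o jsonpath={.status.phase}    # Bound once the pod has been scheduled
	# read the file the pod wrote, from inside the node
	out/minikube-linux-amd64 -p addons-189999 ssh "cat /opt/local-path-provisioner/pvc-7cc8cbe4-e893-4189-9c9f-ee741c5ffab5_default_test-pvc/file1"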

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r9fh5" [5f43e6be-7391-496e-9c51-545bfef3ed7f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005094175s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-189999
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
TestAddons/parallel/Yakd (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fwl6s" [a6c3b31f-4666-4159-949d-8e1e478fe084] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004577844s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-189999 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-189999 addons disable yakd --alsologtostderr -v=1: (5.885008776s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.62s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-189999
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-189999: (11.187665322s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-189999
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-189999
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-189999
--- PASS: TestAddons/StoppedEnableDisable (11.62s)
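
StoppedEnableDisable confirms that addon toggling still works once the profile is stopped. The same three steps by hand, using this run's binary and profile:

	out/minikube-linux-amd64 stop -p addons-189999                        # ~11s in this run
	out/minikube-linux-amd64 addons enable dashboard -p addons-189999     # accepted even though the cluster is down
	out/minikube-linux-amd64 addons disable dashboard -p addons-189999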

                                                
                                    
TestFunctional/serial/CopySyncFile (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/files/etc/test/nested/copy/7874/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.06s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-548331 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m14.908273139s)
--- PASS: TestFunctional/serial/StartWithProxy (74.93s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.68s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0919 18:58:24.516309    7874 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-548331 --alsologtostderr -v=8: (37.671706508s)
functional_test.go:663: soft start took 37.683704889s for "functional-548331" cluster.
I0919 18:59:02.188632    7874 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (37.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-548331 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 cache add registry.k8s.io/pause:3.3: (1.127920155s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-548331 /tmp/TestFunctionalserialCacheCmdcacheadd_local900111437/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache add minikube-local-cache-test:functional-548331
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache delete minikube-local-cache-test:functional-548331
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-548331
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (437.252159ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
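
The useful pattern in cache_reload is the round trip: once the image is deleted inside the node, crictl inspecti fails until cache reload pushes the locally cached copy back in. The same sequence by hand:

	out/minikube-linux-amd64 -p functional-548331 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
	out/minikube-linux-amd64 -p functional-548331 cache reload                                            # re-loads the cached images into the node
	out/minikube-linux-amd64 -p functional-548331 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again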

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 kubectl -- --context functional-548331 get pods
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 kubectl -- --context functional-548331 get pods: (1.111529113s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-548331 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.34s)

                                                
                                    
TestFunctional/serial/ExtraConfig (53.33s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-548331 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.319603793s)
functional_test.go:761: restart took 53.319791392s for "functional-548331" cluster.
I0919 19:00:04.303572    7874 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (53.33s)
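
ExtraConfig restarts the existing cluster with an extra apiserver flag and waits for all components; the --extra-config=component.key=value form used here is the general way to thread flags into control-plane components, and the option is persisted in the profile (it reappears in the ExtraOptions dump of the DryRun output further down). The restart command on its own:

	out/minikube-linux-amd64 start -p functional-548331 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all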

                                                
                                    
TestFunctional/serial/ComponentHealth (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-548331 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 logs: (1.605779186s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 logs --file /tmp/TestFunctionalserialLogsFileCmd2609406972/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 logs --file /tmp/TestFunctionalserialLogsFileCmd2609406972/001/logs.txt: (1.643932723s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
TestFunctional/serial/InvalidService (5.32s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-548331 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-548331
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-548331: exit status 115 (656.47958ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32288 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_5b55102efd84289233ffc613c137836b410b4e4d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-548331 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-548331 delete -f testdata/invalidsvc.yaml: (1.325355368s)
--- PASS: TestFunctional/serial/InvalidService (5.32s)
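
InvalidService covers the failure path of minikube service: the Service from testdata/invalidsvc.yaml selects no running pod, so the command prints the URL table but exits 115 with the SVC_UNREACHABLE advice shown above. A minimal repro (the trailing echo is only there to surface the exit code):

	kubectl --context functional-548331 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-548331; echo "exit=$?"   # expect exit=115
	kubectl --context functional-548331 delete -f testdata/invalidsvc.yaml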

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 config get cpus: exit status 14 (234.551251ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 config get cpus: exit status 14 (471.983961ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.83s)
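
ConfigCmd cycles a per-profile config key through unset/get/set/get/unset, using exit code 14 ("specified key could not be found in config") as the signal that the key is absent. The same cycle:

	out/minikube-linux-amd64 -p functional-548331 config unset cpus
	out/minikube-linux-amd64 -p functional-548331 config get cpus       # exit 14 while unset
	out/minikube-linux-amd64 -p functional-548331 config set cpus 2
	out/minikube-linux-amd64 -p functional-548331 config get cpus       # prints 2
	out/minikube-linux-amd64 -p functional-548331 config unset cpus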

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-548331 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-548331 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 46038: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.49s)

                                                
                                    
TestFunctional/parallel/DryRun (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-548331 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (291.504481ms)

                                                
                                                
-- stdout --
	* [functional-548331] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19664-430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:02:01.982030   45781 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:02:01.982304   45781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:01.982322   45781 out.go:358] Setting ErrFile to fd 2...
	I0919 19:02:01.982333   45781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:01.982805   45781 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
	I0919 19:02:01.983614   45781 out.go:352] Setting JSON to false
	I0919 19:02:01.984905   45781 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":2634,"bootTime":1726769888,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0919 19:02:01.985046   45781 start.go:139] virtualization:  guest
	I0919 19:02:01.988888   45781 out.go:177] * [functional-548331] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	I0919 19:02:01.992220   45781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:02:01.992301   45781 notify.go:220] Checking for updates...
	I0919 19:02:01.999443   45781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:02:02.003300   45781 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	I0919 19:02:02.008734   45781 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19664-430/.minikube
	I0919 19:02:02.015547   45781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:02:02.018051   45781 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0919 19:02:02.022076   45781 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:02:02.023603   45781 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:02:02.066985   45781 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0919 19:02:02.067155   45781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:02:02.169722   45781 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:02:02.153407541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:02:02.169905   45781 docker.go:318] overlay module found
	I0919 19:02:02.173832   45781 out.go:177] * Using the docker driver based on existing profile
	I0919 19:02:02.176832   45781 start.go:297] selected driver: docker
	I0919 19:02:02.176870   45781 start.go:901] validating driver "docker" against &{Name:functional-548331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-548331 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:02:02.177139   45781 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:02:02.180726   45781 out.go:201] 
	W0919 19:02:02.183390   45781 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 19:02:02.186137   45781 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.67s)
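
DryRun leans on start's up-front validation: with --dry-run nothing is provisioned, and the undersized --memory 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before any Docker work happens, while the same dry run without the bad flag validates cleanly. The two invocations:

	out/minikube-linux-amd64 start -p functional-548331 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker   # exit 23
	out/minikube-linux-amd64 start -p functional-548331 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker             # passes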

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-548331 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-548331 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (359.808474ms)

                                                
                                                
-- stdout --
	* [functional-548331] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19664-430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:02:01.680319   45734 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:02:01.680577   45734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:01.680596   45734 out.go:358] Setting ErrFile to fd 2...
	I0919 19:02:01.680607   45734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:01.681126   45734 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
	I0919 19:02:01.681788   45734 out.go:352] Setting JSON to false
	I0919 19:02:01.683256   45734 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":2634,"bootTime":1726769888,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0919 19:02:01.683352   45734 start.go:139] virtualization:  guest
	I0919 19:02:01.687285   45734 out.go:177] * [functional-548331] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	I0919 19:02:01.691040   45734 notify.go:220] Checking for updates...
	I0919 19:02:01.691235   45734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:02:01.694223   45734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:02:01.698314   45734 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19664-430/kubeconfig
	I0919 19:02:01.701811   45734 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19664-430/.minikube
	I0919 19:02:01.705004   45734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:02:01.708697   45734 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0919 19:02:01.714552   45734 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:02:01.715275   45734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:02:01.758103   45734 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0919 19:02:01.758281   45734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:02:01.873444   45734 info.go:266] docker info: {ID:084b1885-1b65-4927-baf7-da2e440f52c1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-19 19:02:01.856655903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337174528 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:02:01.873659   45734 docker.go:318] overlay module found
	I0919 19:02:01.878407   45734 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0919 19:02:01.881615   45734 start.go:297] selected driver: docker
	I0919 19:02:01.881646   45734 start.go:901] validating driver "docker" against &{Name:functional-548331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-548331 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:02:01.881855   45734 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:02:01.886015   45734 out.go:201] 
	W0919 19:02:01.889753   45734 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 19:02:01.893244   45734 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.36s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-548331 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-548331 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-l5z2g" [3a9b136a-5d15-4c22-93df-53856dac3a12] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-l5z2g" [3a9b136a-5d15-4c22-93df-53856dac3a12] Running
E0919 19:01:02.567185    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004506873s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31973
functional_test.go:1675: http://192.168.49.2:31973: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-l5z2g

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31973
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.96s)
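
Note: functional_test.go:1649-1675 above first resolves the NodePort URL with "minikube service hello-node-connect --url" and then issues a plain HTTP GET against it. A minimal standalone sketch of that second step (Go, illustrative only, not the suite's code; the endpoint http://192.168.49.2:31973 is specific to this run):

	// probe_endpoint.go - illustrative sketch, not part of the minikube test suite.
	// GET the NodePort endpoint printed by `minikube service hello-node-connect --url`
	// and print the echoserver response, as the test does before asserting success.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from this run; it changes on every cluster start.
		const url = "http://192.168.49.2:31973"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status: %s\n%s", resp.Status, body)
	}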

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fec7d503-d5cf-4e75-8788-d0e3d7cd7e3e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.008972455s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-548331 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-548331 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-548331 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-548331 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [620e042f-65b8-4fb9-bbe2-40b9c8bc8e92] Pending
helpers_test.go:344: "sp-pod" [620e042f-65b8-4fb9-bbe2-40b9c8bc8e92] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [620e042f-65b8-4fb9-bbe2-40b9c8bc8e92] Running
E0919 19:00:42.047349    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.076914    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.088432    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.109985    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.151494    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.232976    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.394630    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:42.716589    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006772471s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-548331 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-548331 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-548331 delete -f testdata/storage-provisioner/pod.yaml: (1.09677347s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-548331 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4c1b645-b04b-4ae9-af28-1eb4f632fee0] Pending
E0919 19:00:47.203040    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [d4c1b645-b04b-4ae9-af28-1eb4f632fee0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4c1b645-b04b-4ae9-af28-1eb4f632fee0] Running
E0919 19:00:52.324905    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005350003s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-548331 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.62s)

                                                
                                    
TestFunctional/parallel/SSHCmd (3.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 ssh "echo hello": (1.674049132s)
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 ssh "cat /etc/hostname": (1.794185145s)
--- PASS: TestFunctional/parallel/SSHCmd (3.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (10.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /home/docker/cp-test.txt": (1.584601317s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cp functional-548331:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1570099435/001/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 cp functional-548331:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1570099435/001/cp-test.txt: (1.707571276s)
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /home/docker/cp-test.txt": (1.617293161s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: (3.842548664s)
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 ssh -n functional-548331 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.391539251s)
--- PASS: TestFunctional/parallel/CpCmd (10.95s)

                                                
                                    
TestFunctional/parallel/MySQL (35.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-548331 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rztpl" [508a9ca8-ba7c-4f56-88ec-53c1fa169848] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rztpl" [508a9ca8-ba7c-4f56-88ec-53c1fa169848] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.006652741s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;": exit status 1 (352.17592ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:03:05.110091    7874 retry.go:31] will retry after 1.107392231s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;": exit status 1 (354.932801ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:03:06.573177    7874 retry.go:31] will retry after 1.952334449s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;": exit status 1 (207.909753ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:03:08.734252    7874 retry.go:31] will retry after 2.533292418s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-548331 exec mysql-6cdb49bbb-rztpl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.03s)
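
Note: the ERROR 1045 (access denied) and ERROR 2002 (socket not available) responses above are normal while the mysql container is still initialising; the harness simply retries with increasing backoff (retry.go:31) until "show databases;" succeeds. A minimal standalone sketch of the same probe (Go, illustrative only; the pod name mysql-6cdb49bbb-rztpl and context functional-548331 are taken from this run):

	// mysql_probe.go - illustrative sketch, not part of the minikube test suite.
	// Retries `kubectl exec <pod> -- mysql -ppassword -e "show databases;"` until
	// it succeeds, mirroring the retry behaviour logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const (
			context = "functional-548331"    // profile/context from this run
			pod     = "mysql-6cdb49bbb-rztpl" // pod name from this run
		)
		backoff := time.Second
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
				return
			}
			// Access-denied / socket errors are expected while mysqld initialises.
			fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2
		}
		fmt.Println("mysql never became ready")
	}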

                                                
                                    
TestFunctional/parallel/FileSync (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7874/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /etc/test/nested/copy/7874/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

                                                
                                    
TestFunctional/parallel/CertSync (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7874.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /etc/ssl/certs/7874.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7874.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /usr/share/ca-certificates/7874.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/78742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /etc/ssl/certs/78742.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/78742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /usr/share/ca-certificates/78742.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.64s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-548331 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh "sudo systemctl is-active crio": exit status 1 (571.52328ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                    
TestFunctional/parallel/License (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.489394219s)
--- PASS: TestFunctional/parallel/License (1.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (4.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41269: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (4.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-548331 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-548331 apply -f testdata/testsvc.yaml: (1.178575942s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [238d02cd-c265-4611-9213-4d2798ef9d1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [238d02cd-c265-4611-9213-4d2798ef9d1c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007528421s
I0919 19:00:33.079211    7874 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-548331 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-548331 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-blh9k" [20d10400-a3c5-4de7-a896-00852c8f0501] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-blh9k" [20d10400-a3c5-4de7-a896-00852c8f0501] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005290587s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service list -o json
functional_test.go:1494: Took "709.650446ms" to run "out/minikube-linux-amd64 -p functional-548331 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30924
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30924
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "524.13733ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "101.673035ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "658.367372ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "83.522521ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 version -o=json --components: (1.694264258s)
--- PASS: TestFunctional/parallel/Version/components (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548331 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-548331
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-548331
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548331 image ls --format short --alsologtostderr:
I0919 19:03:13.683989   48625 out.go:345] Setting OutFile to fd 1 ...
I0919 19:03:13.684168   48625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:13.684187   48625 out.go:358] Setting ErrFile to fd 2...
I0919 19:03:13.684196   48625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:13.684475   48625 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:03:13.685419   48625 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:13.685697   48625 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:13.686615   48625 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:03:13.721419   48625 ssh_runner.go:195] Run: systemctl --version
I0919 19:03:13.721541   48625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548331
I0919 19:03:13.759812   48625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/functional-548331/id_rsa Username:docker}
I0919 19:03:13.864427   48625 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548331 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-548331 | 0ff6f92011d64 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/kicbase/echo-server               | functional-548331 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548331 image ls --format table --alsologtostderr:
I0919 19:03:14.342171   48692 out.go:345] Setting OutFile to fd 1 ...
I0919 19:03:14.342448   48692 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:14.342465   48692 out.go:358] Setting ErrFile to fd 2...
I0919 19:03:14.342476   48692 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:14.342760   48692 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:03:14.343756   48692 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:14.344002   48692 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:14.344611   48692 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:03:14.374408   48692 ssh_runner.go:195] Run: systemctl --version
I0919 19:03:14.374513   48692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548331
I0919 19:03:14.402299   48692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/functional-548331/id_rsa Username:docker}
I0919 19:03:14.511640   48692 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548331 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"0ff6f92011d64a2945e3bbe2b5b094de176bc787691eddf9359c57d96c346e32","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-548331"],"size":"30"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c
86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-548331"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538
410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548331 image ls --format json --alsologtostderr:
I0919 19:03:14.027030   48658 out.go:345] Setting OutFile to fd 1 ...
I0919 19:03:14.027267   48658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:14.027288   48658 out.go:358] Setting ErrFile to fd 2...
I0919 19:03:14.027297   48658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:14.027591   48658 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:03:14.028493   48658 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:14.028730   48658 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:14.029514   48658 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:03:14.056606   48658 ssh_runner.go:195] Run: systemctl --version
I0919 19:03:14.056747   48658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548331
I0919 19:03:14.086251   48658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/functional-548331/id_rsa Username:docker}
I0919 19:03:14.190988   48658 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-548331 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0ff6f92011d64a2945e3bbe2b5b094de176bc787691eddf9359c57d96c346e32
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-548331
size: "30"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-548331
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548331 image ls --format yaml --alsologtostderr:
I0919 19:03:13.380253   48592 out.go:345] Setting OutFile to fd 1 ...
I0919 19:03:13.380510   48592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:13.380572   48592 out.go:358] Setting ErrFile to fd 2...
I0919 19:03:13.380604   48592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:13.380958   48592 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:03:13.381996   48592 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:13.382244   48592 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:13.382923   48592 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:03:13.410337   48592 ssh_runner.go:195] Run: systemctl --version
I0919 19:03:13.410447   48592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548331
I0919 19:03:13.442263   48592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/functional-548331/id_rsa Username:docker}
I0919 19:03:13.547737   48592 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-548331 ssh pgrep buildkitd: exit status 1 (461.787742ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image build -t localhost/my-image:functional-548331 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 image build -t localhost/my-image:functional-548331 testdata/build --alsologtostderr: (2.714445662s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-548331 image build -t localhost/my-image:functional-548331 testdata/build --alsologtostderr:
I0919 19:03:15.105937   48785 out.go:345] Setting OutFile to fd 1 ...
I0919 19:03:15.106288   48785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:15.106344   48785 out.go:358] Setting ErrFile to fd 2...
I0919 19:03:15.106379   48785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:03:15.106707   48785 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19664-430/.minikube/bin
I0919 19:03:15.107554   48785 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:15.172442   48785 config.go:182] Loaded profile config "functional-548331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 19:03:15.173736   48785 cli_runner.go:164] Run: docker container inspect functional-548331 --format={{.State.Status}}
I0919 19:03:15.202759   48785 ssh_runner.go:195] Run: systemctl --version
I0919 19:03:15.202939   48785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-548331
I0919 19:03:15.232633   48785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19664-430/.minikube/machines/functional-548331/id_rsa Username:docker}
I0919 19:03:15.336755   48785 build_images.go:161] Building image from path: /tmp/build.272972373.tar
I0919 19:03:15.336936   48785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 19:03:15.352929   48785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.272972373.tar
I0919 19:03:15.359407   48785 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.272972373.tar: stat -c "%s %y" /var/lib/minikube/build/build.272972373.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.272972373.tar': No such file or directory
I0919 19:03:15.359453   48785 ssh_runner.go:362] scp /tmp/build.272972373.tar --> /var/lib/minikube/build/build.272972373.tar (3072 bytes)
I0919 19:03:15.403875   48785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.272972373
I0919 19:03:15.419961   48785 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.272972373 -xf /var/lib/minikube/build/build.272972373.tar
I0919 19:03:15.438070   48785 docker.go:360] Building image: /var/lib/minikube/build/build.272972373
I0919 19:03:15.438319   48785 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-548331 /var/lib/minikube/build/build.272972373
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:42050abfd1ffadafee375272b3020ada4912828409a92649cc1feeac0cd726bf done
#8 naming to localhost/my-image:functional-548331 done
#8 DONE 0.1s
I0919 19:03:17.694583   48785 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-548331 /var/lib/minikube/build/build.272972373: (2.25616106s)
I0919 19:03:17.694768   48785 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.272972373
I0919 19:03:17.711632   48785 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.272972373.tar
I0919 19:03:17.727557   48785 build_images.go:217] Built localhost/my-image:functional-548331 from /tmp/build.272972373.tar
I0919 19:03:17.727604   48785 build_images.go:133] succeeded building to: functional-548331
I0919 19:03:17.727612   48785 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)
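The build steps #1-#8 above imply the shape of the context in testdata/build. A minimal sketch of how to reproduce the same build by hand, assuming a Dockerfile reconstructed from steps #5-#7 (the real file is 97 bytes and may differ; content.txt's contents are not shown in the log):

# Hypothetical reconstruction of testdata/build, based only on the build log above
mkdir -p testdata/build && cd testdata/build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo "placeholder" > content.txt    # actual contents unknown; only the ADD step is visible

# The same build the test drives through the minikube CLI, then the listing check
out/minikube-linux-amd64 -p functional-548331 image build -t localhost/my-image:functional-548331 . --alsologtostderr
out/minikube-linux-amd64 -p functional-548331 image ls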

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.681598029s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-548331
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image load --daemon kicbase/echo-server:functional-548331 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-548331 image load --daemon kicbase/echo-server:functional-548331 --alsologtostderr: (1.211339283s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)
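The Setup and ImageLoadDaemon steps amount to staging an image in the host Docker daemon and copying it into the cluster's runtime. A sketch of the same workflow run manually, using only commands that appear in the log (the grep at the end is an added convenience, not part of the test):

# Stage the test image on the host (Setup)
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-548331

# Copy it from the host daemon into the cluster runtime and confirm it arrived
out/minikube-linux-amd64 -p functional-548331 image load --daemon kicbase/echo-server:functional-548331
out/minikube-linux-amd64 -p functional-548331 image ls | grep echo-server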

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image load --daemon kicbase/echo-server:functional-548331 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.270896962s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-548331
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image load --daemon kicbase/echo-server:functional-548331 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image save kicbase/echo-server:functional-548331 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image rm kicbase/echo-server:functional-548331 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)
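The save, rm, and load steps above form a round trip through a tarball on the host. A sketch using the exact paths and tags from the log:

# Export the image from the cluster runtime to a tarball
out/minikube-linux-amd64 -p functional-548331 image save kicbase/echo-server:functional-548331 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr

# Drop it from the cluster, then restore it from the tarball and verify
out/minikube-linux-amd64 -p functional-548331 image rm kicbase/echo-server:functional-548331 --alsologtostderr
out/minikube-linux-amd64 -p functional-548331 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-548331 image ls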

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-548331
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 image save --daemon kicbase/echo-server:functional-548331 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-548331
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
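image save --daemon goes in the opposite direction: it pushes the cluster's copy back into the host Docker daemon. A sketch of the check the test performs, taken directly from the commands above:

docker rmi kicbase/echo-server:functional-548331    # ensure the host no longer has the image
out/minikube-linux-amd64 -p functional-548331 image save --daemon kicbase/echo-server:functional-548331 --alsologtostderr
docker image inspect kicbase/echo-server:functional-548331    # succeeds only if the image came back to the host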

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-548331 docker-env) && out/minikube-linux-amd64 status -p functional-548331"
functional_test.go:499: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-548331 docker-env) && out/minikube-linux-amd64 status -p functional-548331": (1.018095053s)
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-548331 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.52s)
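docker-env prints shell exports that point the local docker CLI at the Docker daemon inside the minikube node; the test evaluates them in a bash subshell. A minimal reproduction, assuming a bash shell:

eval $(out/minikube-linux-amd64 -p functional-548331 docker-env)
docker images                                        # now lists images from the cluster's daemon, not the host's
out/minikube-linux-amd64 status -p functional-548331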

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-548331 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)
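All three UpdateContextCmd subtests run the same command; update-context rewrites the kubeconfig entry for the profile so its server address matches the running cluster, or reports that nothing needed changing. A sketch (the kubectl check is an added follow-up, not part of the test):

out/minikube-linux-amd64 -p functional-548331 update-context --alsologtostderr -v=2
kubectl config current-context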

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-548331 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-548331
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-548331
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-548331
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/FirstStart (78.65s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-066671 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:05:22.884157    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:22.892193    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:22.903640    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:22.925160    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:22.966611    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:23.048042    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:23.209580    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:23.531082    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:24.173095    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:25.454511    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:28.017103    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:33.139464    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:41.892561    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:43.381504    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:06:03.863094    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:06:09.776409    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-066671 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.650044249s)
--- PASS: TestStartStop/group/cloud-shell/serial/FirstStart (78.65s)
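The start invocation is the whole provisioning step; all flags are visible in the log line above. A sketch of the same start plus the host-status check used later in this group:

out/minikube-linux-amd64 start -p cloud-shell-066671 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1
out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-066671 -n cloud-shell-066671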

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-066671 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9df0c1c-a1a7-4d04-8997-b3c51bea7be8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9df0c1c-a1a7-4d04-8997-b3c51bea7be8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: integration-test=busybox healthy within 9.004691105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-066671 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/cloud-shell/serial/DeployApp (9.44s)
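The deploy step applies testdata/busybox.yaml and waits for the integration-test=busybox label. The manifest itself is not reproduced in this log; below is a hedged guess at a minimal equivalent, built only from the pod name, the label, and the busybox image reported later under VerifyKubernetesImages (the sleep command is an assumption):

kubectl --context cloud-shell-066671 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context cloud-shell-066671 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context cloud-shell-066671 exec busybox -- /bin/sh -c "ulimit -n"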

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-066671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-066671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187222302s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context cloud-shell-066671 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.32s)
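The enable step also shows how addon images can be redirected: --images and --registries override the image name and registry an addon uses (here pointing metrics-server at fake.domain so the override can be verified in the deployment). A sketch of the same call and check (the grep is an added convenience):

out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-066671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
kubectl --context cloud-shell-066671 describe deploy/metrics-server -n kube-system | grep -i image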

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/Stop (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p cloud-shell-066671 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p cloud-shell-066671 --alsologtostderr -v=3: (11.106493162s)
--- PASS: TestStartStop/group/cloud-shell/serial/Stop (11.11s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-066671 -n cloud-shell-066671
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-066671 -n cloud-shell-066671: exit status 7 (126.686494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-066671 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/SecondStart (272.5s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-066671 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:06:44.825006    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:08:06.748225    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:10:22.883312    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:10:41.892546    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/addons-189999/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:10:50.590078    7874 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19664-430/.minikube/profiles/functional-548331/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-066671 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m31.843956982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-066671 -n cloud-shell-066671
--- PASS: TestStartStop/group/cloud-shell/serial/SecondStart (272.50s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mlr9c" [2b9b2e50-f4eb-41d2-bfb7-5d342c203d30] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005186192s
--- PASS: TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mlr9c" [2b9b2e50-f4eb-41d2-bfb7-5d342c203d30] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005566542s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context cloud-shell-066671 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p cloud-shell-066671 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/cloud-shell/serial/Pause (4.6s)

                                                
                                                
=== RUN   TestStartStop/group/cloud-shell/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p cloud-shell-066671 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-066671 -n cloud-shell-066671
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-066671 -n cloud-shell-066671: exit status 2 (466.873837ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-066671 -n cloud-shell-066671
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-066671 -n cloud-shell-066671: exit status 2 (456.717911ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p cloud-shell-066671 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-066671 -n cloud-shell-066671
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-066671 -n cloud-shell-066671: (1.057023735s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-066671 -n cloud-shell-066671
--- PASS: TestStartStop/group/cloud-shell/serial/Pause (4.60s)
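The pause sequence explains the exit-status-2 results above: while components are paused, status exits non-zero, which the test tolerates as "may be ok". A sketch of the same cycle; after unpause the two status calls are expected to report running components again:

out/minikube-linux-amd64 pause -p cloud-shell-066671 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-066671 -n cloud-shell-066671   # prints "Paused", exits 2
out/minikube-linux-amd64 unpause -p cloud-shell-066671 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-066671 -n cloud-shell-066671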

                                                
                                    

Test skip (5/108)

x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    