Test Report: Docker_Cloud_Shell 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Test fail (6/107)

TestAddons/parallel/Registry (76.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.300144ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-gs28r" [a4004cb2-7560-45d6-957e-58b28943f86e] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007328805s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-64jpr" [54e0a75a-3fc8-4445-922b-6a5f4489144e] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005516738s
addons_test.go:338: (dbg) Run:  kubectl --context addons-785680 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-785680 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-785680 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.166385218s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-785680 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 ip
2024/09/23 12:27:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable registry --alsologtostderr -v=1
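
Note: the in-cluster probe that failed above can be re-run by hand while triaging; a minimal sketch, assuming the addons-785680 profile and the registry addon are still up (it is the same command the test issues at addons_test.go:343):

	kubectl --context addons-785680 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy registry answers with "HTTP/1.1 200" (the value addons_test.go:349 asserts on); here the probe timed out even though both registry pods were Running, which points at in-cluster reachability (DNS or the service path) rather than pod health.
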
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-785680
helpers_test.go:235: (dbg) docker inspect addons-785680:

-- stdout --
	[
	    {
	        "Id": "bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa",
	        "Created": "2024-09-23T12:14:36.417066701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257789,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T12:14:36.5847802Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa/hosts",
	        "LogPath": "/var/lib/docker/containers/bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa/bcfa1e631e1d24d11dab71d22f52a11566de3c36de0c302295dc7bd046f260aa-json.log",
	        "Name": "/addons-785680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-785680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-785680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d25e281b2a0ce8142e3539a114ec30b2913af23e33c60355c7f59fc3bbad6595-init/diff:/var/lib/docker/overlay2/c0bf08eecdedb28bfcf4dadedc4da0da25b175652a94cb9dadefe7e23e5ae06c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d25e281b2a0ce8142e3539a114ec30b2913af23e33c60355c7f59fc3bbad6595/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d25e281b2a0ce8142e3539a114ec30b2913af23e33c60355c7f59fc3bbad6595/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d25e281b2a0ce8142e3539a114ec30b2913af23e33c60355c7f59fc3bbad6595/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-785680",
	                "Source": "/var/lib/docker/volumes/addons-785680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-785680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-785680",
	                "name.minikube.sigs.k8s.io": "addons-785680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8daa9705bf7fcd01d531764f88e917eabfaf20a60321086505f1ca661dd9b12",
	            "SandboxKey": "/var/run/docker/netns/d8daa9705bf7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-785680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aa404c84e1b12a7c86c53cd9b7ef9a54544c46c9efabcb1147feebf309d7b105",
	                    "EndpointID": "20d9b6629c1a2022c8f03a5c904fafd1ffcb9384e5cf3a26c9b4c0defcb7d861",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-785680",
	                        "bcfa1e631e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
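
Note: individual fields can be pulled from the inspect output with a Go template instead of reading the full JSON; a small sketch using the same --format mechanism minikube's own helpers use later in these logs (profile name addons-785680 taken from this run):

	# host port mapped to the registry's 5000/tcp (32850 in the output above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-785680
	# container IP on the addons-785680 network (192.168.49.2 above)
	docker container inspect -f '{{(index .NetworkSettings.Networks "addons-785680").IPAddress}}' addons-785680
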
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-785680 -n addons-785680
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 logs -n 25: (1.751244362s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                                                         | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:13 UTC |                     |
	|         | addons-785680                                                                               |               |                       |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:13 UTC |                     |
	|         | addons-785680                                                                               |               |                       |         |                     |                     |
	| start   | -p addons-785680 --wait=true                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:13 UTC | 23 Sep 24 12:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |               |                       |         |                     |                     |
	|         | --addons=registry                                                                           |               |                       |         |                     |                     |
	|         | --addons=metrics-server                                                                     |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |               |                       |         |                     |                     |
	|         | --driver=docker                                                                             |               |                       |         |                     |                     |
	|         | --container-runtime=docker                                                                  |               |                       |         |                     |                     |
	|         | --addons=ingress                                                                            |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |               |                       |         |                     |                     |
	| addons  | addons-785680 addons disable                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:17 UTC | 23 Sep 24 12:17 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |               |                       |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:25 UTC | 23 Sep 24 12:26 UTC |
	|         | -p addons-785680                                                                            |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |               |                       |         |                     |                     |
	| addons  | addons-785680 addons disable                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:26 UTC | 23 Sep 24 12:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |               |                       |         |                     |                     |
	|         | -v=1                                                                                        |               |                       |         |                     |                     |
	| addons  | addons-785680 addons disable                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:26 UTC | 23 Sep 24 12:26 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |               |                       |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:26 UTC | 23 Sep 24 12:26 UTC |
	|         | -p addons-785680                                                                            |               |                       |         |                     |                     |
	| ssh     | addons-785680 ssh cat                                                                       | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:26 UTC | 23 Sep 24 12:26 UTC |
	|         | /opt/local-path-provisioner/pvc-d6f18bb2-3816-44db-9014-d267a08bbb45_default_test-pvc/file1 |               |                       |         |                     |                     |
	| addons  | addons-785680 addons disable                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:26 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |               |                       |         |                     |                     |
	| ip      | addons-785680 ip                                                                            | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:27 UTC | 23 Sep 24 12:27 UTC |
	| addons  | addons-785680 addons disable                                                                | addons-785680 | g528047478195_compute | v1.34.0 | 23 Sep 24 12:27 UTC | 23 Sep 24 12:27 UTC |
	|         | registry --alsologtostderr                                                                  |               |                       |         |                     |                     |
	|         | -v=1                                                                                        |               |                       |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:13:47
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:13:47.657884  257312 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:13:47.658087  257312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:13:47.658105  257312 out.go:358] Setting ErrFile to fd 2...
	I0923 12:13:47.658114  257312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:13:47.658404  257312 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
	W0923 12:13:47.658652  257312 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/config/config.json: no such file or directory
	I0923 12:13:47.659182  257312 out.go:352] Setting JSON to false
	I0923 12:13:47.660143  257312 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":23004,"bootTime":1727070623,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0923 12:13:47.660268  257312 start.go:139] virtualization:  guest
	I0923 12:13:47.665175  257312 out.go:177] * [addons-785680] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	W0923 12:13:47.669162  257312 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 12:13:47.669221  257312 notify.go:220] Checking for updates...
	I0923 12:13:47.669261  257312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:13:47.672204  257312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:13:47.675127  257312 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	I0923 12:13:47.678289  257312 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube
	I0923 12:13:47.681588  257312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:13:47.684404  257312 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0923 12:13:47.687913  257312 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:13:47.731227  257312 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0923 12:13:47.731466  257312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:13:47.831568  257312 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-23 12:13:47.813726862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:13:47.831759  257312 docker.go:318] overlay module found
	I0923 12:13:47.835327  257312 out.go:177] * Using the docker driver based on user configuration
	I0923 12:13:47.839218  257312 start.go:297] selected driver: docker
	I0923 12:13:47.839261  257312 start.go:901] validating driver "docker" against <nil>
	I0923 12:13:47.839283  257312 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:13:47.840037  257312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:13:47.927554  257312 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-23 12:13:47.906113306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:13:47.927829  257312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:13:47.928243  257312 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0923 12:13:47.928264  257312 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0923 12:13:47.928367  257312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:13:47.931357  257312 out.go:177] * Using Docker driver with root privileges
	I0923 12:13:47.934224  257312 cni.go:84] Creating CNI manager for ""
	I0923 12:13:47.934396  257312 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 12:13:47.934419  257312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:13:47.934569  257312 start.go:340] cluster config:
	{Name:addons-785680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-785680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:13:47.937814  257312 out.go:177] * Starting "addons-785680" primary control-plane node in "addons-785680" cluster
	I0923 12:13:47.940501  257312 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 12:13:47.946280  257312 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:13:47.949483  257312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:13:47.949610  257312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:13:47.972911  257312 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:13:47.972949  257312 cache.go:56] Caching tarball of preloaded images
	I0923 12:13:47.973390  257312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:13:47.976186  257312 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 12:13:47.976612  257312 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 12:13:47.976783  257312 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 12:13:47.977405  257312 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
	I0923 12:13:47.980716  257312 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 12:13:48.039288  257312 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:13:51.479042  257312 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 12:13:51.479236  257312 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 12:13:52.925137  257312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:13:52.925757  257312 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/config.json ...
	I0923 12:13:52.925815  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/config.json: {Name:mk43cebf447d5447ab6ed1b406d0d6499318a980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:13:58.558502  257312 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 12:13:58.558526  257312 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 12:14:23.620191  257312 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 12:14:23.620256  257312 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:14:23.620363  257312 start.go:360] acquireMachinesLock for addons-785680: {Name:mkac38df09490c9c570bde087623d194fd34f742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:14:23.620734  257312 start.go:364] duration metric: took 325.562µs to acquireMachinesLock for "addons-785680"
	I0923 12:14:23.620789  257312 start.go:93] Provisioning new machine with config: &{Name:addons-785680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-785680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:14:23.620928  257312 start.go:125] createHost starting for "" (driver="docker")
	I0923 12:14:23.625454  257312 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 12:14:23.625933  257312 start.go:159] libmachine.API.Create for "addons-785680" (driver="docker")
	I0923 12:14:23.625981  257312 client.go:168] LocalClient.Create starting
	I0923 12:14:23.626131  257312 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem
	I0923 12:14:23.917423  257312 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/cert.pem
	I0923 12:14:24.049812  257312 cli_runner.go:164] Run: docker network inspect addons-785680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 12:14:24.074622  257312 cli_runner.go:211] docker network inspect addons-785680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 12:14:24.074754  257312 network_create.go:284] running [docker network inspect addons-785680] to gather additional debugging logs...
	I0923 12:14:24.074793  257312 cli_runner.go:164] Run: docker network inspect addons-785680
	W0923 12:14:24.097120  257312 cli_runner.go:211] docker network inspect addons-785680 returned with exit code 1
	I0923 12:14:24.097182  257312 network_create.go:287] error running [docker network inspect addons-785680]: docker network inspect addons-785680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-785680 not found
	I0923 12:14:24.097210  257312 network_create.go:289] output of [docker network inspect addons-785680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-785680 not found
	
	** /stderr **
	I0923 12:14:24.097393  257312 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:14:24.122343  257312 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8ec80}
	I0923 12:14:24.122395  257312 network_create.go:124] attempt to create docker network addons-785680 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0923 12:14:24.122506  257312 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-785680 addons-785680
	I0923 12:14:24.222128  257312 network_create.go:108] docker network addons-785680 192.168.49.0/24 created
	I0923 12:14:24.222173  257312 kic.go:121] calculated static IP "192.168.49.2" for the "addons-785680" container
	I0923 12:14:24.222345  257312 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 12:14:24.248738  257312 cli_runner.go:164] Run: docker volume create addons-785680 --label name.minikube.sigs.k8s.io=addons-785680 --label created_by.minikube.sigs.k8s.io=true
	I0923 12:14:24.276726  257312 oci.go:103] Successfully created a docker volume addons-785680
	I0923 12:14:24.276872  257312 cli_runner.go:164] Run: docker run --rm --name addons-785680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785680 --entrypoint /usr/bin/test -v addons-785680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 12:14:28.581250  257312 cli_runner.go:217] Completed: docker run --rm --name addons-785680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785680 --entrypoint /usr/bin/test -v addons-785680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (4.304305815s)
	I0923 12:14:28.581288  257312 oci.go:107] Successfully prepared a docker volume addons-785680
	I0923 12:14:28.581334  257312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:14:28.581367  257312 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 12:14:28.581486  257312 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-785680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 12:14:36.306029  257312 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-785680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (7.72448094s)
	I0923 12:14:36.306080  257312 kic.go:203] duration metric: took 7.72470887s to extract preloaded images to volume ...
	W0923 12:14:36.306202  257312 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0923 12:14:36.306270  257312 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0923 12:14:36.306404  257312 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 12:14:36.392130  257312 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-785680 --name addons-785680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-785680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-785680 --network addons-785680 --ip 192.168.49.2 --volume addons-785680:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 12:14:36.790106  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Running}}
	I0923 12:14:36.836639  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:14:36.879019  257312 cli_runner.go:164] Run: docker exec addons-785680 stat /var/lib/dpkg/alternatives/iptables
	I0923 12:14:37.025490  257312 oci.go:144] the created container "addons-785680" has a running status.
	I0923 12:14:37.025531  257312 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa...
	I0923 12:14:38.043496  257312 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 12:14:38.090268  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:14:38.128480  257312 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 12:14:38.128513  257312 kic_runner.go:114] Args: [docker exec --privileged addons-785680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 12:14:38.220125  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:14:38.261909  257312 machine.go:93] provisionDockerMachine start ...
	I0923 12:14:38.262073  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:38.300454  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:38.300874  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:38.300902  257312 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:14:38.475780  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-785680
	
	I0923 12:14:38.475822  257312 ubuntu.go:169] provisioning hostname "addons-785680"
	I0923 12:14:38.475965  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:38.504546  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:38.504846  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:38.504870  257312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-785680 && echo "addons-785680" | sudo tee /etc/hostname
	I0923 12:14:38.678712  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-785680
	
	I0923 12:14:38.678935  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:38.712111  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:38.712440  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:38.712520  257312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-785680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-785680/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-785680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:14:38.859928  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:14:38.859985  257312 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube}
	I0923 12:14:38.860019  257312 ubuntu.go:177] setting up certificates
	I0923 12:14:38.860046  257312 provision.go:84] configureAuth start
	I0923 12:14:38.860161  257312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785680
	I0923 12:14:38.889767  257312 provision.go:143] copyHostCerts
	I0923 12:14:38.889874  257312 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.pem (1119 bytes)
	I0923 12:14:38.890086  257312 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/cert.pem (1164 bytes)
	I0923 12:14:38.890241  257312 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/key.pem (1679 bytes)
	I0923 12:14:38.890411  257312 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-785680 san=[127.0.0.1 192.168.49.2 addons-785680 localhost minikube]
	I0923 12:14:39.369703  257312 provision.go:177] copyRemoteCerts
	I0923 12:14:39.369889  257312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:14:39.369994  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:39.397355  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:14:39.502988  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1119 bytes)
	I0923 12:14:39.544426  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0923 12:14:39.581640  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:14:39.618072  257312 provision.go:87] duration metric: took 757.998853ms to configureAuth
	I0923 12:14:39.618106  257312 ubuntu.go:193] setting minikube options for container-runtime
	I0923 12:14:39.618455  257312 config.go:182] Loaded profile config "addons-785680": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:14:39.618597  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:39.647504  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:39.647864  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:39.647887  257312 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:14:39.795291  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 12:14:39.795343  257312 ubuntu.go:71] root file system type: overlay
	I0923 12:14:39.795538  257312 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:14:39.795669  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:39.827803  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:39.828123  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:39.828245  257312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:14:39.992829  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:14:39.993003  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:40.022221  257312 main.go:141] libmachine: Using SSH client type: native
	I0923 12:14:40.022677  257312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0923 12:14:40.022731  257312 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:14:41.148774  257312 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 12:14:39.989925375 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
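The exchange above is an idempotent write-compare-swap: the desired unit is rendered to docker.service.new, and only when diff reports a difference is the new file moved into place and the daemon reloaded and restarted. A generic sketch of the same pattern (illustrative only; render_unit is a hypothetical helper, not minikube code):

	render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null  # render the desired unit
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    # files differ: swap the new unit in and restart, exactly as the log's one-liner does
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	fi
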
	I0923 12:14:41.148848  257312 machine.go:96] duration metric: took 2.886911848s to provisionDockerMachine
	I0923 12:14:41.148868  257312 client.go:171] duration metric: took 17.522876294s to LocalClient.Create
	I0923 12:14:41.148900  257312 start.go:167] duration metric: took 17.522969352s to libmachine.API.Create "addons-785680"
	I0923 12:14:41.148914  257312 start.go:293] postStartSetup for "addons-785680" (driver="docker")
	I0923 12:14:41.148933  257312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:14:41.149052  257312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:14:41.149135  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:41.176712  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:14:41.281159  257312 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:14:41.286384  257312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 12:14:41.286441  257312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 12:14:41.286465  257312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 12:14:41.286476  257312 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 12:14:41.286491  257312 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/addons for local assets ...
	I0923 12:14:41.286576  257312 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/files for local assets ...
	I0923 12:14:41.286625  257312 start.go:296] duration metric: took 137.699697ms for postStartSetup
	I0923 12:14:41.287190  257312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785680
	I0923 12:14:41.314890  257312 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/config.json ...
	I0923 12:14:41.315372  257312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:14:41.315476  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:41.341531  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:14:41.439708  257312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 12:14:41.447058  257312 start.go:128] duration metric: took 17.826105098s to createHost
	I0923 12:14:41.447219  257312 start.go:83] releasing machines lock for "addons-785680", held for 17.826456898s
	I0923 12:14:41.447404  257312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-785680
	I0923 12:14:41.474942  257312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:14:41.474959  257312 ssh_runner.go:195] Run: cat /version.json
	I0923 12:14:41.475047  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:41.475088  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:14:41.510451  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:14:41.520967  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:14:41.753860  257312 ssh_runner.go:195] Run: systemctl --version
	I0923 12:14:41.761385  257312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:14:41.768833  257312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 12:14:41.817800  257312 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 12:14:41.818045  257312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:14:41.862159  257312 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:14:41.862232  257312 start.go:495] detecting cgroup driver to use...
	I0923 12:14:41.862281  257312 detect.go:190] detected "systemd" cgroup driver on host os
	I0923 12:14:41.862549  257312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:14:41.889155  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:14:41.904224  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:14:41.919772  257312 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0923 12:14:41.920022  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0923 12:14:41.935633  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:14:41.950954  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:14:41.966393  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:14:41.981941  257312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:14:41.996089  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:14:42.011441  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:14:42.026960  257312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:14:42.042264  257312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:14:42.056681  257312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:14:42.070586  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:42.199821  257312 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:14:42.317271  257312 start.go:495] detecting cgroup driver to use...
	I0923 12:14:42.317371  257312 detect.go:190] detected "systemd" cgroup driver on host os
	I0923 12:14:42.317483  257312 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:14:42.345212  257312 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 12:14:42.345355  257312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:14:42.374017  257312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:14:42.410228  257312 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:14:42.418033  257312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:14:42.438779  257312 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:14:42.478603  257312 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:14:42.713504  257312 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:14:42.922103  257312 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0923 12:14:42.922272  257312 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0923 12:14:42.951137  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:43.082799  257312 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:14:43.505641  257312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:14:43.524456  257312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:14:43.545852  257312 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:14:43.681622  257312 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:14:43.810025  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:43.942351  257312 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:14:43.973595  257312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:14:43.991692  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:44.123408  257312 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:14:44.225691  257312 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:14:44.225826  257312 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:14:44.234722  257312 start.go:563] Will wait 60s for crictl version
	I0923 12:14:44.234852  257312 ssh_runner.go:195] Run: which crictl
	I0923 12:14:44.242126  257312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:14:44.299095  257312 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:14:44.299215  257312 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:14:44.344734  257312 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:14:44.388701  257312 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:14:44.388887  257312 cli_runner.go:164] Run: docker network inspect addons-785680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:14:44.416192  257312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 12:14:44.421815  257312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
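The /etc/hosts rewrite above is a re-runnable filter-then-append idiom: drop any stale entry for the name, append the desired mapping to a temp file, then copy it over the original in one step. The same idiom with a hypothetical host name and a documentation IP:

	{ grep -v $'\thost.example.internal$' /etc/hosts; echo $'192.0.2.1\thost.example.internal'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts
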
	I0923 12:14:44.441986  257312 out.go:177]   - kubelet.cgroups-per-qos=false
	I0923 12:14:44.444762  257312 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0923 12:14:44.447797  257312 kubeadm.go:883] updating cluster {Name:addons-785680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-785680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:14:44.447992  257312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:14:44.448162  257312 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:14:44.478600  257312 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:14:44.478633  257312 docker.go:615] Images already preloaded, skipping extraction
	I0923 12:14:44.478823  257312 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:14:44.509984  257312 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:14:44.510025  257312 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:14:44.510042  257312 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 12:14:44.510208  257312 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-785680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-785680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
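The two kubelet ExtraOptions travel together by necessity: the kubelet rejects --cgroups-per-qos=false unless node-allocatable enforcement is cleared, so --enforce-node-allocatable="" must accompany it, as it does in the ExecStart line above. On a bare kubelet invocation the pair looks like (other flags elided):

	kubelet --cgroups-per-qos=false --enforce-node-allocatable="" ...
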
	I0923 12:14:44.510361  257312 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:14:44.582384  257312 cni.go:84] Creating CNI manager for ""
	I0923 12:14:44.582447  257312 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 12:14:44.582466  257312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:14:44.582531  257312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-785680 NodeName:addons-785680 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:14:44.582886  257312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-785680"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
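kubeadm later warns (the W0923 lines further down) that this file uses the deprecated kubeadm.k8s.io/v1beta3 API. Following the warning's own suggestion, the config can be migrated and re-checked offline before init; the output path here is illustrative, and the validate subcommand assumes kubeadm v1.26 or newer:

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml
	kubeadm config validate --config /tmp/kubeadm-new.yaml
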
	I0923 12:14:44.583135  257312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:14:44.598617  257312 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:14:44.598765  257312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:14:44.613845  257312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0923 12:14:44.643082  257312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:14:44.672295  257312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0923 12:14:44.701658  257312 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 12:14:44.707375  257312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:14:44.725864  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:44.855865  257312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:14:44.884463  257312 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680 for IP: 192.168.49.2
	I0923 12:14:44.884491  257312 certs.go:194] generating shared ca certs ...
	I0923 12:14:44.884514  257312 certs.go:226] acquiring lock for ca certs: {Name:mk3ce6e30454b0b4a28ad7d1f5d2a2bc1cef5813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:44.884836  257312 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.key
	I0923 12:14:45.090781  257312 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.crt ...
	I0923 12:14:45.090818  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.crt: {Name:mkc0a3bb65f8af0327ea217c6913ab256c9c6313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.091287  257312 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.key ...
	I0923 12:14:45.091336  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.key: {Name:mka21ead40ee49af71077f01d6481f505ca61e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.091626  257312 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.key
	I0923 12:14:45.233725  257312 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.crt ...
	I0923 12:14:45.233780  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.crt: {Name:mkb0177794bde938695711f7805f647a0544fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.234234  257312 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.key ...
	I0923 12:14:45.234262  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.key: {Name:mkc564b157e23101604acdf67c09d144c3da5187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.234623  257312 certs.go:256] generating profile certs ...
	I0923 12:14:45.234737  257312 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.key
	I0923 12:14:45.234790  257312 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt with IP's: []
	I0923 12:14:45.396020  257312 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt ...
	I0923 12:14:45.396059  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: {Name:mk625667f74ef7a0b2fe8df1b92d4fe50b87ceb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.396562  257312 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.key ...
	I0923 12:14:45.396590  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.key: {Name:mkf7e6305892ebdc6e7e3f8e7b93d65fd978cafd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.396894  257312 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key.00a98a82
	I0923 12:14:45.396936  257312 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt.00a98a82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 12:14:45.596532  257312 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt.00a98a82 ...
	I0923 12:14:45.596587  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt.00a98a82: {Name:mk9fb8fc2af1fea2376148071e81c78cce6f3b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.597003  257312 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key.00a98a82 ...
	I0923 12:14:45.597031  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key.00a98a82: {Name:mk4ee077d21d8b07388ef9817c67768b667aabcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.597377  257312 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt.00a98a82 -> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt
	I0923 12:14:45.597550  257312 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key.00a98a82 -> /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key
	I0923 12:14:45.597656  257312 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.key
	I0923 12:14:45.597698  257312 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.crt with IP's: []
	I0923 12:14:45.822354  257312 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.crt ...
	I0923 12:14:45.822399  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.crt: {Name:mk4af71c6fab7fcd5ea91472314d2f717f3b4947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.822885  257312 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.key ...
	I0923 12:14:45.822917  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.key: {Name:mkb65ba7b83005d8601afbf02b599e1e9e0f5302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:45.823558  257312 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 12:14:45.823642  257312 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/ca.pem (1119 bytes)
	I0923 12:14:45.823702  257312 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/cert.pem (1164 bytes)
	I0923 12:14:45.823766  257312 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/certs/key.pem (1679 bytes)
	I0923 12:14:45.824845  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:14:45.911855  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:14:45.975044  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:14:46.024114  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:14:46.062848  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 12:14:46.101673  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:14:46.139137  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:14:46.176301  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:14:46.214025  257312 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:14:46.254939  257312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:14:46.284273  257312 ssh_runner.go:195] Run: openssl version
	I0923 12:14:46.293051  257312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:14:46.308730  257312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:46.314792  257312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:46.314914  257312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:46.324600  257312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
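The b5213941.0 symlink name is not arbitrary: OpenSSL locates CA certificates by subject hash, which is what the openssl x509 -hash call two lines up computes for minikubeCA.pem. Spelled out:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <hash>.0 is the lookup name
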
	I0923 12:14:46.339261  257312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:14:46.344739  257312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:14:46.344855  257312 kubeadm.go:392] StartCluster: {Name:addons-785680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-785680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:14:46.345088  257312 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:14:46.376996  257312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:14:46.391539  257312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:14:46.406020  257312 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 12:14:46.406137  257312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:14:46.420182  257312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:14:46.420280  257312 kubeadm.go:157] found existing configuration files:
	
	I0923 12:14:46.420391  257312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:14:46.434586  257312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:14:46.434721  257312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:14:46.448267  257312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:14:46.462475  257312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:14:46.462615  257312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:14:46.476062  257312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:14:46.489988  257312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:14:46.490102  257312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:14:46.503473  257312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:14:46.516968  257312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:14:46.517094  257312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:14:46.530657  257312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 12:14:46.591118  257312 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:14:46.591236  257312 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:14:46.770573  257312 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:14:46.770757  257312 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:14:46.770913  257312 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:14:46.789918  257312 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:14:46.794163  257312 out.go:235]   - Generating certificates and keys ...
	I0923 12:14:46.794301  257312 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:14:46.794431  257312 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:14:46.886698  257312 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:14:47.072976  257312 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:14:47.463629  257312 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:14:47.679271  257312 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:14:47.860556  257312 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:14:47.861009  257312 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-785680 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 12:14:48.009007  257312 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:14:48.009634  257312 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-785680 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 12:14:48.170706  257312 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:14:48.510404  257312 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:14:48.675820  257312 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:14:48.676135  257312 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:14:48.802847  257312 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:14:49.181276  257312 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:14:49.571352  257312 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:14:49.972453  257312 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:14:50.210672  257312 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:14:50.211651  257312 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:14:50.214744  257312 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:14:50.217870  257312 out.go:235]   - Booting up control plane ...
	I0923 12:14:50.218014  257312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:14:50.220323  257312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:14:50.221458  257312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:14:50.237903  257312 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:14:50.246483  257312 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:14:50.246593  257312 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:14:50.396255  257312 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:14:50.396748  257312 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:14:50.897815  257312 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.528596ms
	I0923 12:14:50.897961  257312 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:14:57.900827  257312 kubeadm.go:310] [api-check] The API server is healthy after 7.002879258s
	I0923 12:14:57.920375  257312 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:14:57.937093  257312 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:14:57.965154  257312 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:14:57.965610  257312 kubeadm.go:310] [mark-control-plane] Marking the node addons-785680 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:14:57.978658  257312 kubeadm.go:310] [bootstrap-token] Using token: vqefj8.bmveuwqbxobjpqog
	I0923 12:14:57.981881  257312 out.go:235]   - Configuring RBAC rules ...
	I0923 12:14:57.982083  257312 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:14:57.990949  257312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:14:58.003067  257312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:14:58.007415  257312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:14:58.011999  257312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:14:58.016325  257312 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:14:58.310693  257312 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:14:58.838275  257312 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:14:59.309490  257312 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:14:59.311058  257312 kubeadm.go:310] 
	I0923 12:14:59.311179  257312 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:14:59.311200  257312 kubeadm.go:310] 
	I0923 12:14:59.311397  257312 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:14:59.311416  257312 kubeadm.go:310] 
	I0923 12:14:59.311466  257312 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:14:59.311600  257312 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:14:59.311707  257312 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:14:59.311734  257312 kubeadm.go:310] 
	I0923 12:14:59.311858  257312 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:14:59.311874  257312 kubeadm.go:310] 
	I0923 12:14:59.311971  257312 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:14:59.311986  257312 kubeadm.go:310] 
	I0923 12:14:59.312105  257312 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:14:59.312261  257312 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:14:59.312421  257312 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:14:59.312438  257312 kubeadm.go:310] 
	I0923 12:14:59.312616  257312 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:14:59.312774  257312 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:14:59.312790  257312 kubeadm.go:310] 
	I0923 12:14:59.312964  257312 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vqefj8.bmveuwqbxobjpqog \
	I0923 12:14:59.313193  257312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ea154aea8bdf3b7817326e26ea8040a775c69b3dae2f6cbd8ca934ab0facc08 \
	I0923 12:14:59.313242  257312 kubeadm.go:310] 	--control-plane 
	I0923 12:14:59.313257  257312 kubeadm.go:310] 
	I0923 12:14:59.313471  257312 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:14:59.313487  257312 kubeadm.go:310] 
	I0923 12:14:59.313652  257312 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vqefj8.bmveuwqbxobjpqog \
	I0923 12:14:59.314053  257312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ea154aea8bdf3b7817326e26ea8040a775c69b3dae2f6cbd8ca934ab0facc08 
	I0923 12:14:59.319264  257312 kubeadm.go:310] W0923 12:14:46.587337    1681 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:59.319875  257312 kubeadm.go:310] W0923 12:14:46.588408    1681 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:59.320127  257312 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
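The join commands kubeadm printed above embed a bootstrap token (vqefj8.…), which is TTL-limited. If it has expired by the time a node tries to join, a fresh join line can be generated on the control plane; this is standard kubeadm usage, not something the harness does here:

	kubeadm token create --print-join-command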
	I0923 12:14:59.320167  257312 cni.go:84] Creating CNI manager for ""
	I0923 12:14:59.320192  257312 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 12:14:59.324131  257312 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 12:14:59.327045  257312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 12:14:59.342565  257312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
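The 496-byte conflist scp'd above is minikube's bridge CNI definition; the log does not capture its contents. A generic bridge-plugin conflist of the same shape (a sketch with assumed values, not the exact file minikube writes, though the 10.244.0.0/16 pod subnet matches the pod IPs seen later in this log) would be written like so:

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF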
	I0923 12:14:59.374585  257312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:14:59.374836  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:59.374966  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-785680 minikube.k8s.io/updated_at=2024_09_23T12_14_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-785680 minikube.k8s.io/primary=true
	I0923 12:14:59.585819  257312 ops.go:34] apiserver oom_adj: -16
	I0923 12:14:59.585906  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:00.086619  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:00.586229  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:01.086818  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:01.586266  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:02.086622  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:02.586378  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:03.086208  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:03.586278  257312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:15:03.716376  257312 kubeadm.go:1113] duration metric: took 4.341620034s to wait for elevateKubeSystemPrivileges
	I0923 12:15:03.716417  257312 kubeadm.go:394] duration metric: took 17.37158588s to StartCluster
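The half-second cadence of the 'kubectl get sa default' calls above is minikube polling for the default service account to appear before it considers kube-system privileges elevated (the cluster-admin clusterrolebinding itself was created at 12:14:59.374). A minimal shell equivalent of that poll (illustrative only, not minikube's actual Go retry logic):

	# Poll every 500ms until the default service account is visible,
	# i.e. the apiserver and controller-manager have finished bootstrapping it.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done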
	I0923 12:15:03.716445  257312 settings.go:142] acquiring lock: {Name:mkf4da057632d32b5373b905e5f7bc6f0da7ec2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:15:03.716834  257312 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	I0923 12:15:03.717567  257312 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig: {Name:mkfa712fe64bf48e8abc1ff31339db0422c68c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:15:03.718070  257312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:15:03.718265  257312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:15:03.718654  257312 config.go:182] Loaded profile config "addons-785680": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:03.718580  257312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 12:15:03.718692  257312 addons.go:69] Setting cloud-spanner=true in profile "addons-785680"
	I0923 12:15:03.718703  257312 addons.go:69] Setting yakd=true in profile "addons-785680"
	I0923 12:15:03.718712  257312 addons.go:234] Setting addon cloud-spanner=true in "addons-785680"
	I0923 12:15:03.718720  257312 addons.go:234] Setting addon yakd=true in "addons-785680"
	I0923 12:15:03.718749  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.718749  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.719621  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.719663  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.721162  257312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-785680"
	I0923 12:15:03.721256  257312 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-785680"
	I0923 12:15:03.721330  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.722439  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.723775  257312 out.go:177] * Verifying Kubernetes components...
	I0923 12:15:03.724852  257312 addons.go:69] Setting metrics-server=true in profile "addons-785680"
	I0923 12:15:03.724881  257312 addons.go:234] Setting addon metrics-server=true in "addons-785680"
	I0923 12:15:03.724935  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.725786  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.728284  257312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:15:03.728386  257312 addons.go:69] Setting default-storageclass=true in profile "addons-785680"
	I0923 12:15:03.728412  257312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-785680"
	I0923 12:15:03.729076  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.737604  257312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-785680"
	I0923 12:15:03.737648  257312 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-785680"
	I0923 12:15:03.737739  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.738862  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.742451  257312 addons.go:69] Setting gcp-auth=true in profile "addons-785680"
	I0923 12:15:03.742496  257312 mustload.go:65] Loading cluster: addons-785680
	I0923 12:15:03.742869  257312 config.go:182] Loaded profile config "addons-785680": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:03.743539  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.751722  257312 addons.go:69] Setting registry=true in profile "addons-785680"
	I0923 12:15:03.751762  257312 addons.go:234] Setting addon registry=true in "addons-785680"
	I0923 12:15:03.751823  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.752696  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.762096  257312 addons.go:69] Setting ingress=true in profile "addons-785680"
	I0923 12:15:03.762137  257312 addons.go:234] Setting addon ingress=true in "addons-785680"
	I0923 12:15:03.762208  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.763419  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.772406  257312 addons.go:69] Setting storage-provisioner=true in profile "addons-785680"
	I0923 12:15:03.772456  257312 addons.go:234] Setting addon storage-provisioner=true in "addons-785680"
	I0923 12:15:03.772507  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.773967  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.780663  257312 addons.go:69] Setting ingress-dns=true in profile "addons-785680"
	I0923 12:15:03.780708  257312 addons.go:234] Setting addon ingress-dns=true in "addons-785680"
	I0923 12:15:03.780792  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.781765  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.787539  257312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-785680"
	I0923 12:15:03.787577  257312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-785680"
	I0923 12:15:03.788418  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.793381  257312 addons.go:69] Setting inspektor-gadget=true in profile "addons-785680"
	I0923 12:15:03.793417  257312 addons.go:234] Setting addon inspektor-gadget=true in "addons-785680"
	I0923 12:15:03.793464  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.794243  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.820400  257312 addons.go:69] Setting volcano=true in profile "addons-785680"
	I0923 12:15:03.820456  257312 addons.go:234] Setting addon volcano=true in "addons-785680"
	I0923 12:15:03.820508  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.821550  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:03.849559  257312 addons.go:69] Setting volumesnapshots=true in profile "addons-785680"
	I0923 12:15:03.849611  257312 addons.go:234] Setting addon volumesnapshots=true in "addons-785680"
	I0923 12:15:03.849664  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:03.850627  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:04.219932  257312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
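The pipeline above rewrites CoreDNS's Corefile in place: the sed script inserts a hosts block ahead of the existing 'forward . /etc/resolv.conf' line and a 'log' directive after 'errors', then feeds the edited ConfigMap back through 'kubectl replace'. Reconstructed from the sed expressions (the resulting Corefile is not itself captured in the log), the affected fragment becomes:

	        errors
	        log
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what lets pods resolve host.minikube.internal to the Docker network gateway (192.168.49.1), confirmed below at 12:15:07.825 when the host record is reported injected.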
	I0923 12:15:04.271568  257312 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 12:15:04.276439  257312 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 12:15:04.276571  257312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 12:15:04.276769  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
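The Go template in these 'docker container inspect -f' calls extracts the host port Docker mapped to the container's SSH port (22/tcp); each addon installer then scp's its manifests through that port. Run standalone, it reduces to:

	docker container inspect \
	  -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' \
	  addons-785680
	# prints 32848 here, matching the ssh clients dialing 127.0.0.1:32848 below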
	I0923 12:15:04.348608  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:04.353784  257312 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 12:15:04.358444  257312 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:15:04.358543  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 12:15:04.358716  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.468407  257312 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 12:15:04.472660  257312 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:15:04.472697  257312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:15:04.472821  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.485723  257312 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 12:15:04.490047  257312 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 12:15:04.490100  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 12:15:04.490320  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.537365  257312 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 12:15:04.546854  257312 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 12:15:04.555507  257312 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 12:15:04.555540  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 12:15:04.555654  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.576832  257312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:15:04.586100  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 12:15:04.591520  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 12:15:04.595766  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 12:15:04.602548  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 12:15:04.612869  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 12:15:04.620508  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 12:15:04.626117  257312 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-785680"
	I0923 12:15:04.626173  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:04.627040  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:04.640693  257312 addons.go:234] Setting addon default-storageclass=true in "addons-785680"
	I0923 12:15:04.640831  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:04.647075  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:04.695596  257312 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 12:15:04.698397  257312 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:15:04.698422  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 12:15:04.698527  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.715055  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:04.716523  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 12:15:04.719648  257312 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 12:15:04.722539  257312 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 12:15:04.722750  257312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:15:04.723003  257312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:15:04.723103  257312 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 12:15:04.723191  257312 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 12:15:04.723339  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.728434  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 12:15:04.728582  257312 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 12:15:04.728859  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:04.731271  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 12:15:04.731298  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 12:15:04.731413  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.750251  257312 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 12:15:04.755843  257312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:15:04.755873  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:15:04.755982  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.757066  257312 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 12:15:04.757112  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 12:15:04.757220  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.800102  257312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 12:15:04.804950  257312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:15:04.810265  257312 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:15:04.810294  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 12:15:04.810441  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.813973  257312 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 12:15:04.819877  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 12:15:04.819952  257312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 12:15:04.820097  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:04.937053  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.012498  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.025133  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.100940  257312 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:15:05.100969  257312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:15:05.101121  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:05.144596  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.149855  257312 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 12:15:05.154903  257312 out.go:177]   - Using image docker.io/busybox:stable
	I0923 12:15:05.157961  257312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:15:05.157986  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 12:15:05.158118  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:05.219674  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.237411  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.264540  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.276890  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.287509  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.287749  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.304490  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	W0923 12:15:05.318562  257312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 12:15:05.318604  257312 retry.go:31] will retry after 342.647361ms: ssh: handshake failed: EOF
	I0923 12:15:05.342279  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:05.509632  257312 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 12:15:05.509751  257312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 12:15:05.611228  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 12:15:06.006222  257312 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 12:15:06.006276  257312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 12:15:06.175196  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 12:15:06.415509  257312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 12:15:06.415543  257312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 12:15:06.516708  257312 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 12:15:06.516747  257312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 12:15:06.541643  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 12:15:06.541677  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 12:15:06.662589  257312 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 12:15:06.662622  257312 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 12:15:06.688722  257312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:15:06.688752  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 12:15:06.715491  257312 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 12:15:06.715529  257312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 12:15:06.736323  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 12:15:06.754393  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 12:15:06.791102  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:15:06.837654  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 12:15:07.025242  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:15:07.077578  257312 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:15:07.077613  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 12:15:07.104367  257312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 12:15:07.104410  257312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 12:15:07.125414  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 12:15:07.133725  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 12:15:07.133771  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 12:15:07.196634  257312 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 12:15:07.196693  257312 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 12:15:07.229082  257312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:15:07.229118  257312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:15:07.279477  257312 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:15:07.279514  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 12:15:07.553471  257312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 12:15:07.553506  257312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 12:15:07.575402  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 12:15:07.575438  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 12:15:07.582878  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 12:15:07.602961  257312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:15:07.603001  257312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:15:07.615892  257312 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 12:15:07.615921  257312 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 12:15:07.777843  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 12:15:07.825754  257312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.605762364s)
	I0923 12:15:07.825797  257312 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 12:15:07.827610  257312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.2507413s)
	I0923 12:15:07.828899  257312 node_ready.go:35] waiting up to 6m0s for node "addons-785680" to be "Ready" ...
	I0923 12:15:07.879322  257312 node_ready.go:49] node "addons-785680" has status "Ready":"True"
	I0923 12:15:07.879357  257312 node_ready.go:38] duration metric: took 50.424123ms for node "addons-785680" to be "Ready" ...
	I0923 12:15:07.879371  257312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:15:08.009838  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 12:15:08.009871  257312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 12:15:08.035779  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 12:15:08.035815  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 12:15:08.048054  257312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 12:15:08.048096  257312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 12:15:08.080255  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:15:08.121540  257312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:08.387473  257312 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:15:08.387506  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 12:15:08.427988  257312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 12:15:08.428024  257312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 12:15:08.496192  257312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 12:15:08.496225  257312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 12:15:08.575733  257312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-785680" context rescaled to 1 replicas
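kubeadm deploys CoreDNS with two replicas by default; on this single-node cluster minikube rescales the deployment to one, which is why a second coredns pod (coredns-7c65d6cfc9-g7pd8, terminated above) becomes surplus. The equivalent manual command (a sketch of the effect, not minikube's internal API call) would be:

	kubectl --context addons-785680 -n kube-system scale deployment coredns --replicas=1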
	I0923 12:15:08.891876  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:15:08.985759  257312 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 12:15:08.985804  257312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 12:15:09.036188  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 12:15:09.036214  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 12:15:09.328495  257312 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:15:09.328523  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 12:15:09.411594  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 12:15:09.411636  257312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 12:15:09.716564  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 12:15:09.957456  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 12:15:09.957491  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 12:15:10.436200  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:10.658409  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 12:15:10.658440  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 12:15:11.206486  257312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:15:11.206517  257312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 12:15:11.795420  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 12:15:12.676360  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:13.363037  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.751651885s)
	I0923 12:15:13.363116  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.187889379s)
	I0923 12:15:15.157746  257312 pod_ready.go:98] pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 12:15:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-23 12:15:10 +0000 UTC,FinishedAt:2024-09-23 12:15:11 +0000 UTC,ContainerID:docker://3e93486e51c3d65d47d694dc5944ac530e427a01ef55681fbba55717bc285f77,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://3e93486e51c3d65d47d694dc5944ac530e427a01ef55681fbba55717bc285f77 Started:0xc01982234c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc01980a500} {Name:kube-api-access-gx9ds MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc01980a510}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 12:15:15.157792  257312 pod_ready.go:82] duration metric: took 7.036135498s for pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace to be "Ready" ...
	E0923 12:15:15.157810  257312 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-g7pd8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:11 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 12:15:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 12:15:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-23 12:15:10 +0000 UTC,FinishedAt:2024-09-23 12:15:11 +0000 UTC,ContainerID:docker://3e93486e51c3d65d47d694dc5944ac530e427a01ef55681fbba55717bc285f77,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://3e93486e51c3d65d47d694dc5944ac530e427a01ef55681fbba55717bc285f77 Started:0xc01982234c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc01980a500} {Name:kube-api-access-gx9ds MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc01980a510}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 12:15:15.157832  257312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:15.805033  257312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 12:15:15.805170  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:15.894060  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:16.217772  257312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 12:15:16.286434  257312 addons.go:234] Setting addon gcp-auth=true in "addons-785680"
	I0923 12:15:16.286624  257312 host.go:66] Checking if "addons-785680" exists ...
	I0923 12:15:16.287786  257312 cli_runner.go:164] Run: docker container inspect addons-785680 --format={{.State.Status}}
	I0923 12:15:16.329893  257312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 12:15:16.330063  257312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-785680
	I0923 12:15:16.383460  257312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/addons-785680/id_rsa Username:docker}
	I0923 12:15:17.290784  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:18.519347  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.782954228s)
	I0923 12:15:19.486789  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:22.102049  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:24.694793  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:27.169654  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:29.539823  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:32.316766  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:32.578425  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (25.823949892s)
	I0923 12:15:32.578732  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (25.787587461s)
	I0923 12:15:32.578904  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (25.741219412s)
	I0923 12:15:32.578994  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (25.553722318s)
	I0923 12:15:32.579785  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (24.996869356s)
	I0923 12:15:32.579907  257312 addons.go:475] Verifying addon registry=true in "addons-785680"
	I0923 12:15:32.580256  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (24.802373578s)
	I0923 12:15:32.580803  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (22.864203455s)
	I0923 12:15:32.580522  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (24.500224983s)
	I0923 12:15:32.581070  257312 addons.go:475] Verifying addon metrics-server=true in "addons-785680"
	I0923 12:15:32.580699  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (23.688769279s)
	W0923 12:15:32.581117  257312 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 12:15:32.581139  257312 retry.go:31] will retry after 206.63549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
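The error above is an ordering race, not a bad manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the CRD is not yet established when the custom resource is validated. The harness handles it by retrying (re-applying with --force at 12:15:32.788 below); a common way to avoid the race outright, sketched here rather than taken from minikube, is to gate the CR on CRD establishment:

	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml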
	I0923 12:15:32.580036  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (25.454271965s)
	I0923 12:15:32.581165  257312 addons.go:475] Verifying addon ingress=true in "addons-785680"
	I0923 12:15:32.585068  257312 out.go:177] * Verifying registry addon...
	I0923 12:15:32.585113  257312 out.go:177] * Verifying ingress addon...
	I0923 12:15:32.585210  257312 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-785680 service yakd-dashboard -n yakd-dashboard
	
	I0923 12:15:32.590544  257312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 12:15:32.590541  257312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 12:15:32.710816  257312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 12:15:32.710936  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:32.712833  257312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 12:15:32.712922  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:32.788223  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 12:15:33.414684  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:33.419395  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:33.731964  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:33.734146  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:34.416657  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:34.417740  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:34.466513  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:35.152530  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:35.152975  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:35.415469  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (23.61994667s)
	I0923 12:15:35.415659  257312 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-785680"
	I0923 12:15:35.416574  257312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (19.086598636s)
	I0923 12:15:35.419921  257312 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 12:15:35.420159  257312 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 12:15:35.425339  257312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 12:15:35.425934  257312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 12:15:35.429263  257312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 12:15:35.429389  257312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 12:15:35.455601  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:35.457714  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:35.534631  257312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 12:15:35.534745  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:35.545383  257312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 12:15:35.545422  257312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 12:15:35.601070  257312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:15:35.601098  257312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 12:15:35.658215  257312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 12:15:36.251720  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:36.253413  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:36.255456  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:36.459882  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:36.461879  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:36.492924  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:36.503457  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:36.660428  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:36.663046  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:36.969982  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:37.266487  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:37.294434  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:37.643165  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:37.686071  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:37.748526  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:37.875751  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.087381629s)
	I0923 12:15:37.938301  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:38.086202  257312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.427932874s)
	I0923 12:15:38.091800  257312 addons.go:475] Verifying addon gcp-auth=true in "addons-785680"
	I0923 12:15:38.095054  257312 out.go:177] * Verifying gcp-auth addon...
	I0923 12:15:38.098887  257312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 12:15:38.120161  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:38.121013  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:38.220923  257312 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 12:15:38.436844  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:38.603492  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:38.608118  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:38.673779  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:38.933077  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:39.137282  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:39.156028  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:39.438377  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:39.607017  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:39.612230  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:39.950769  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:40.120486  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:40.121508  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:40.453401  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:40.607496  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:40.610220  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:40.937433  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:41.110467  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:41.116717  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:41.180282  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:41.436889  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:41.600337  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:41.602832  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:41.939693  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:42.101153  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:42.103748  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:42.434869  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:42.598564  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:42.607564  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:42.935525  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:43.101378  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:43.101836  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:43.438434  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:43.601636  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:43.604170  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:43.673228  257312 pod_ready.go:103] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"False"
	I0923 12:15:43.948789  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:44.103975  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:44.105152  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:44.171550  257312 pod_ready.go:93] pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.171669  257312 pod_ready.go:82] duration metric: took 29.013819448s for pod "coredns-7c65d6cfc9-h6nr5" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.171749  257312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.189361  257312 pod_ready.go:93] pod "etcd-addons-785680" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.190519  257312 pod_ready.go:82] duration metric: took 18.718304ms for pod "etcd-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.190617  257312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.201614  257312 pod_ready.go:93] pod "kube-apiserver-addons-785680" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.201644  257312 pod_ready.go:82] duration metric: took 10.969101ms for pod "kube-apiserver-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.201660  257312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.209487  257312 pod_ready.go:93] pod "kube-controller-manager-addons-785680" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.209515  257312 pod_ready.go:82] duration metric: took 7.842048ms for pod "kube-controller-manager-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.209537  257312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bk2ss" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.217447  257312 pod_ready.go:93] pod "kube-proxy-bk2ss" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.217479  257312 pod_ready.go:82] duration metric: took 7.929458ms for pod "kube-proxy-bk2ss" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.217494  257312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.436787  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:44.566324  257312 pod_ready.go:93] pod "kube-scheduler-addons-785680" in "kube-system" namespace has status "Ready":"True"
	I0923 12:15:44.566356  257312 pod_ready.go:82] duration metric: took 348.850633ms for pod "kube-scheduler-addons-785680" in "kube-system" namespace to be "Ready" ...
	I0923 12:15:44.566383  257312 pod_ready.go:39] duration metric: took 36.686981284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:15:44.566482  257312 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:15:44.566612  257312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:15:44.613445  257312 api_server.go:72] duration metric: took 40.895329367s to wait for apiserver process to appear ...
	I0923 12:15:44.613501  257312 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:15:44.613532  257312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 12:15:44.625684  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:44.636039  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:44.640202  257312 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 12:15:44.643165  257312 api_server.go:141] control plane version: v1.31.1
	I0923 12:15:44.643201  257312 api_server.go:131] duration metric: took 29.689313ms to wait for apiserver health ...
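
Both apiserver probes above are plain shell checks; a hedged standalone form of each, taking the endpoint and port (192.168.49.2:8443) from the log:

	# Process check (api_server.go:52 onward): newest process whose full
	# command line exactly matches the pattern (-x exact, -n newest, -f full cmdline).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Health check (api_server.go:253): -k skips TLS verification for the
	# cluster's self-signed certificate; a healthy control plane answers "ok".
	curl -k https://192.168.49.2:8443/healthz
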
	I0923 12:15:44.643219  257312 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:15:44.777617  257312 system_pods.go:59] 17 kube-system pods found
	I0923 12:15:44.777680  257312 system_pods.go:61] "coredns-7c65d6cfc9-h6nr5" [8d834293-ae57-4f98-8300-4bfd4dc7a65b] Running
	I0923 12:15:44.777696  257312 system_pods.go:61] "csi-hostpath-attacher-0" [ea168197-e84b-411f-8a0b-14bce2237be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:15:44.777721  257312 system_pods.go:61] "csi-hostpath-resizer-0" [e955aca9-ace8-4fe0-81ba-f58d43ea8180] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:15:44.777738  257312 system_pods.go:61] "csi-hostpathplugin-lwkth" [59c09992-d71c-47c9-86d8-6a23ef2ee921] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:15:44.777753  257312 system_pods.go:61] "etcd-addons-785680" [44b5acb4-5ae2-4389-84b8-c6dea3f53e46] Running
	I0923 12:15:44.777764  257312 system_pods.go:61] "kube-apiserver-addons-785680" [012a2bb1-fcc4-4ebf-985d-bd14b5d6f553] Running
	I0923 12:15:44.777781  257312 system_pods.go:61] "kube-controller-manager-addons-785680" [aeff34ca-d427-4ba8-9b4f-6ad476504a2f] Running
	I0923 12:15:44.777802  257312 system_pods.go:61] "kube-ingress-dns-minikube" [c8e0465f-94d7-41b1-acac-037f9df14f57] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 12:15:44.777811  257312 system_pods.go:61] "kube-proxy-bk2ss" [deb10bde-1ce4-49a1-a6cb-ad0168a2ccee] Running
	I0923 12:15:44.777834  257312 system_pods.go:61] "kube-scheduler-addons-785680" [cbd95ce2-977c-491e-9922-367ade7228ac] Running
	I0923 12:15:44.777851  257312 system_pods.go:61] "metrics-server-84c5f94fbc-2gg67" [60fbd9ca-e159-4eb2-b768-53e1e220cc1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:15:44.777860  257312 system_pods.go:61] "nvidia-device-plugin-daemonset-2st59" [cba3dfd0-b8a8-46d8-9a28-88a2f37b0d2d] Running
	I0923 12:15:44.777872  257312 system_pods.go:61] "registry-66c9cd494c-gs28r" [a4004cb2-7560-45d6-957e-58b28943f86e] Running
	I0923 12:15:44.777883  257312 system_pods.go:61] "registry-proxy-64jpr" [54e0a75a-3fc8-4445-922b-6a5f4489144e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:15:44.777901  257312 system_pods.go:61] "snapshot-controller-56fcc65765-6s2d8" [914a9c90-db48-409a-85be-9e7b9a834552] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:15:44.777919  257312 system_pods.go:61] "snapshot-controller-56fcc65765-gf4sq" [cca8fb58-87b6-435a-a5b9-b31b646f345c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:15:44.777933  257312 system_pods.go:61] "storage-provisioner" [a449dc61-74e2-4059-b06e-90156c2a8a7b] Running
	I0923 12:15:44.777946  257312 system_pods.go:74] duration metric: took 134.716083ms to wait for pod list to return data ...
	I0923 12:15:44.777960  257312 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:15:44.932559  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:44.964510  257312 default_sa.go:45] found service account: "default"
	I0923 12:15:44.964639  257312 default_sa.go:55] duration metric: took 186.661971ms for default service account to be created ...
	I0923 12:15:44.964706  257312 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:15:45.098720  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:45.098853  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:45.188807  257312 system_pods.go:86] 17 kube-system pods found
	I0923 12:15:45.188937  257312 system_pods.go:89] "coredns-7c65d6cfc9-h6nr5" [8d834293-ae57-4f98-8300-4bfd4dc7a65b] Running
	I0923 12:15:45.189006  257312 system_pods.go:89] "csi-hostpath-attacher-0" [ea168197-e84b-411f-8a0b-14bce2237be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 12:15:45.189061  257312 system_pods.go:89] "csi-hostpath-resizer-0" [e955aca9-ace8-4fe0-81ba-f58d43ea8180] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 12:15:45.189157  257312 system_pods.go:89] "csi-hostpathplugin-lwkth" [59c09992-d71c-47c9-86d8-6a23ef2ee921] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 12:15:45.189201  257312 system_pods.go:89] "etcd-addons-785680" [44b5acb4-5ae2-4389-84b8-c6dea3f53e46] Running
	I0923 12:15:45.189234  257312 system_pods.go:89] "kube-apiserver-addons-785680" [012a2bb1-fcc4-4ebf-985d-bd14b5d6f553] Running
	I0923 12:15:45.189287  257312 system_pods.go:89] "kube-controller-manager-addons-785680" [aeff34ca-d427-4ba8-9b4f-6ad476504a2f] Running
	I0923 12:15:45.189351  257312 system_pods.go:89] "kube-ingress-dns-minikube" [c8e0465f-94d7-41b1-acac-037f9df14f57] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 12:15:45.189383  257312 system_pods.go:89] "kube-proxy-bk2ss" [deb10bde-1ce4-49a1-a6cb-ad0168a2ccee] Running
	I0923 12:15:45.189432  257312 system_pods.go:89] "kube-scheduler-addons-785680" [cbd95ce2-977c-491e-9922-367ade7228ac] Running
	I0923 12:15:45.189481  257312 system_pods.go:89] "metrics-server-84c5f94fbc-2gg67" [60fbd9ca-e159-4eb2-b768-53e1e220cc1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:15:45.189511  257312 system_pods.go:89] "nvidia-device-plugin-daemonset-2st59" [cba3dfd0-b8a8-46d8-9a28-88a2f37b0d2d] Running
	I0923 12:15:45.190736  257312 system_pods.go:89] "registry-66c9cd494c-gs28r" [a4004cb2-7560-45d6-957e-58b28943f86e] Running
	I0923 12:15:45.190821  257312 system_pods.go:89] "registry-proxy-64jpr" [54e0a75a-3fc8-4445-922b-6a5f4489144e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 12:15:45.190874  257312 system_pods.go:89] "snapshot-controller-56fcc65765-6s2d8" [914a9c90-db48-409a-85be-9e7b9a834552] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:15:45.190910  257312 system_pods.go:89] "snapshot-controller-56fcc65765-gf4sq" [cca8fb58-87b6-435a-a5b9-b31b646f345c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 12:15:45.190958  257312 system_pods.go:89] "storage-provisioner" [a449dc61-74e2-4059-b06e-90156c2a8a7b] Running
	I0923 12:15:45.191051  257312 system_pods.go:126] duration metric: took 226.250518ms to wait for k8s-apps to be running ...
	I0923 12:15:45.191134  257312 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:15:45.191326  257312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:15:45.221552  257312 system_svc.go:56] duration metric: took 30.40749ms WaitForService to wait for kubelet
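
The kubelet check is likewise a single systemctl call; an equivalent manual probe, with the trailing echo added purely for illustration:

	# is-active exits 0 only while the unit is active (running).
	sudo systemctl is-active --quiet kubelet && echo kubelet is running
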
	I0923 12:15:45.221708  257312 kubeadm.go:582] duration metric: took 41.503593945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:15:45.221989  257312 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:15:45.366854  257312 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0923 12:15:45.366988  257312 node_conditions.go:123] node cpu capacity is 2
	I0923 12:15:45.367069  257312 node_conditions.go:105] duration metric: took 145.007787ms to run NodePressure ...
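
The NodePressure pass reads those capacities from the node object; a sketch of the same lookup by hand, assuming the node carries the profile name addons-785680 as minikube nodes conventionally do:

	# Prints the capacity map, e.g. cpu, ephemeral-storage, memory, pods.
	kubectl get node addons-785680 -o jsonpath='{.status.capacity}'
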
	I0923 12:15:45.367126  257312 start.go:241] waiting for startup goroutines ...
	I0923 12:15:45.436905  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:45.608256  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:45.609277  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:45.965887  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:46.100509  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:46.104141  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:46.435081  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:46.600977  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:46.601608  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:46.935811  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:47.100463  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:47.105252  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:47.434602  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:47.623818  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:47.626929  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:47.939282  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:48.099243  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:48.102056  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:48.435484  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:48.602143  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:48.604077  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:48.935942  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:49.097402  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:49.100133  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:49.584231  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:49.604472  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:49.608067  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:49.933494  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:50.119474  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:50.120945  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:50.447478  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:50.647262  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:50.658434  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:50.936586  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:51.099637  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:51.103199  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:51.468022  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:51.605282  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:51.607900  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:51.947183  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:52.098483  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:52.099523  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:52.433650  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:52.599780  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:52.601304  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:52.959123  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:53.204783  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:53.207056  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:53.450502  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:53.610098  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:53.619035  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:53.938851  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:54.104037  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:54.108373  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:54.444670  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:54.617120  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:54.631018  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:54.935964  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:55.115116  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:55.116067  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:55.434384  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:55.624545  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:55.641425  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:55.974188  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:56.102529  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:56.104298  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:56.439436  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:56.607333  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:56.620279  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:56.938475  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:57.103653  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:57.105800  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:57.436700  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:57.611858  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:57.621353  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:57.941405  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:58.105394  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:58.113860  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:58.433876  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:58.596491  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:58.611866  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:58.949260  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:59.096451  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:59.096768  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:59.432806  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:15:59.598096  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:15:59.599344  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:15:59.933945  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:00.114179  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 12:16:00.116647  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:00.447594  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:00.608472  257312 kapi.go:107] duration metric: took 28.017929095s to wait for kubernetes.io/minikube-addons=registry ...
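
The kapi.go poll that just completed (28s for the registry label) has a rough kubectl equivalent; a sketch assuming the same selector, namespace, and timeout conventions seen in the log:

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=10m
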
	I0923 12:16:00.609043  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:00.940004  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:01.097725  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:01.446789  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:01.599923  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:01.942963  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:02.112795  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:02.435216  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:02.598638  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:02.937152  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:03.101037  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:03.434327  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:03.614810  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:03.949497  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:04.100285  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:04.472773  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:04.600357  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:04.947903  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:05.100060  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:05.434870  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:05.596972  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:05.932436  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:06.260903  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:06.435585  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:06.599017  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:06.935283  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:07.107781  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:07.515168  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:07.599425  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:07.950868  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:08.101473  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:08.443799  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:08.599535  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:08.935452  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:09.117443  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:09.454242  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:09.601675  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:09.947917  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:10.097336  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:10.491814  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:10.742876  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:10.937908  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:11.103841  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:11.459737  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:11.632206  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:11.939838  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:12.104631  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:12.447447  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:12.618467  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:12.937816  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:13.101514  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:13.464176  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:13.605112  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:13.944185  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:14.096864  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:14.439094  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:14.611906  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:14.932471  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:15.099870  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:15.437099  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:15.597777  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:16.046751  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:16.112590  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:16.436043  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:16.640986  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:16.933841  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:17.270109  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:17.436698  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:17.599634  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:17.933858  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:18.107335  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:18.444854  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:18.598059  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:18.966443  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:19.135590  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:19.436915  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:19.622982  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:19.943276  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:20.109711  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:20.444276  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:20.704281  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:20.936029  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:21.097840  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:21.475258  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:21.611956  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:21.934745  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:22.112998  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:22.441250  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:22.608019  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:23.146598  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:23.149011  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:23.450325  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:23.629457  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:23.960127  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:24.103924  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:24.438634  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:24.609171  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:24.939249  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:25.097718  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:25.438692  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:25.608973  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:26.027280  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:26.108954  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:26.434749  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:26.604368  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:26.937271  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:27.114675  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:27.559413  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:27.598618  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:27.943877  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:28.260722  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:28.464117  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:28.647098  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:28.937846  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:29.114958  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:29.453233  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:29.598000  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:29.942501  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:30.184752  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:30.435448  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:30.600909  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:30.934775  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:31.120765  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:31.449260  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:31.597871  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:31.943497  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:32.098489  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:32.467681  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:32.604179  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:33.070810  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:33.188993  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:33.436877  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:33.645425  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:33.961605  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:34.110194  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:34.450511  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:34.656498  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:34.954890  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:35.155222  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:35.437091  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:35.613949  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:35.958678  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:36.128907  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:36.452764  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:36.599293  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:36.938542  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:37.106608  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:37.454431  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:37.597184  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:38.010038  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:38.140439  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:38.451501  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:38.640394  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:38.950847  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:39.117674  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:39.438246  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:39.629716  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:39.978619  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:40.133181  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:40.453495  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:40.606568  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:40.934604  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:41.097700  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:41.883087  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:41.885902  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:41.991346  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:42.100943  257312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 12:16:42.479863  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:42.663263  257312 kapi.go:107] duration metric: took 1m10.072689076s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 12:16:43.034823  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:43.474763  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:43.957339  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:44.446075  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:44.961841  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:45.433789  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:46.012778  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:46.559194  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:46.940125  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 12:16:47.433017  257312 kapi.go:107] duration metric: took 1m12.007079828s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 12:17:00.604239  257312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 12:17:00.604389  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:01.103567  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:01.604770  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:02.104530  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:02.603630  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:03.105917  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:03.608638  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:04.132409  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:04.611179  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:05.108407  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:05.606437  257312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 12:17:06.104839  257312 kapi.go:107] duration metric: took 1m28.005947701s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 12:17:06.107941  257312 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-785680 cluster.
	I0923 12:17:06.110527  257312 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 12:17:06.112890  257312 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 12:17:06.115736  257312 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, volcano, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0923 12:17:06.119003  257312 addons.go:510] duration metric: took 2m2.400415636s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner-rancher volcano storage-provisioner ingress-dns inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0923 12:17:06.119101  257312 start.go:246] waiting for cluster config update ...
	I0923 12:17:06.119187  257312 start.go:255] writing updated cluster config ...
	I0923 12:17:06.119824  257312 ssh_runner.go:195] Run: rm -f paused
	I0923 12:17:06.547094  257312 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:17:06.550572  257312 out.go:177] * Done! kubectl is now configured to use "addons-785680" cluster and "default" namespace by default
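	A minimal sketch of the opt-out described in the gcp-auth messages above, assuming the webhook only checks for the presence of the `gcp-auth-skip-secret` label key as the log states; the pod name, image, and command below are illustrative, not taken from this run:
	
		# pod-no-gcp-creds.yaml (hypothetical filename)
		apiVersion: v1
		kind: Pod
		metadata:
		  name: no-gcp-creds               # illustrative name
		  labels:
		    gcp-auth-skip-secret: "true"   # per the message above, this key opts the pod out of credential mounting
		spec:
		  containers:
		  - name: app
		    image: busybox:stable          # any image; illustrative value
		    command: ["sleep", "3600"]
	
	For pods created before the addon finished starting, the log above offers two options: recreate them, or rerun the enable step with the refresh flag, e.g. `minikube -p addons-785680 addons enable gcp-auth --refresh`.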
	
	
	==> Docker <==
	Sep 23 12:26:22 addons-785680 dockerd[1164]: time="2024-09-23T12:26:22.832371548Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8ba1f8e6b6806eb6 traceID=b4bd60ab87659568d10a5ab2fdb0ce39
	Sep 23 12:26:22 addons-785680 dockerd[1164]: time="2024-09-23T12:26:22.835538238Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8ba1f8e6b6806eb6 traceID=b4bd60ab87659568d10a5ab2fdb0ce39
	Sep 23 12:26:26 addons-785680 dockerd[1164]: time="2024-09-23T12:26:26.499643560Z" level=info msg="ignoring event" container=84f8590cd296aee7bbfeae31be8d35d9ced7ae63af572fee51ece41438722e69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:26 addons-785680 dockerd[1164]: time="2024-09-23T12:26:26.647072499Z" level=info msg="ignoring event" container=d3272c167151d3ca3daca3be288d53658ae8433e377eded3476c320a84142d27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:38 addons-785680 dockerd[1164]: time="2024-09-23T12:26:38.511894235Z" level=info msg="ignoring event" container=aa0867ccc8912fa6365e5ab310d340b8197ce4593c69f05ade87c5a137cc148e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:38 addons-785680 dockerd[1164]: time="2024-09-23T12:26:38.706896382Z" level=info msg="ignoring event" container=6d279df04b4a0e905659ad0efaf625ebad4b7108fa3a00e354b590084119e378 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:39 addons-785680 cri-dockerd[1421]: time="2024-09-23T12:26:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/501c1bf37805ff5865cb8e0e1afab172d56505bfd812c6739a3c286a614b54a3/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 23 12:26:39 addons-785680 dockerd[1164]: time="2024-09-23T12:26:39.908367638Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=edbdf69ae5b9c292 traceID=4e4977f393c4e8301018a8c63c0d3d15
	Sep 23 12:26:40 addons-785680 cri-dockerd[1421]: time="2024-09-23T12:26:40Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 23 12:26:40 addons-785680 dockerd[1164]: time="2024-09-23T12:26:40.880549413Z" level=info msg="ignoring event" container=80b7ed5ed6b66963af3ef11cef0a7da89899b667a2e2e82ef7508a90e774b970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:42 addons-785680 dockerd[1164]: time="2024-09-23T12:26:42.605189196Z" level=info msg="ignoring event" container=501c1bf37805ff5865cb8e0e1afab172d56505bfd812c6739a3c286a614b54a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:44 addons-785680 cri-dockerd[1421]: time="2024-09-23T12:26:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3f970e2d160d546471830361a4a69071658eb0c3a5b05bd4a3234a5764914423/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 23 12:26:45 addons-785680 cri-dockerd[1421]: time="2024-09-23T12:26:45Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 23 12:26:45 addons-785680 dockerd[1164]: time="2024-09-23T12:26:45.972146578Z" level=info msg="ignoring event" container=804ef0afbb8cc4062096f6a5acdb2fb9ad4128a938a265a21ebd5003d20a6d1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:48 addons-785680 dockerd[1164]: time="2024-09-23T12:26:48.118734179Z" level=info msg="ignoring event" container=3f970e2d160d546471830361a4a69071658eb0c3a5b05bd4a3234a5764914423 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:49 addons-785680 cri-dockerd[1421]: time="2024-09-23T12:26:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4946852279bcd9841f32a886e642191b403b205532ef93a214e0eb53dc38b53/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-b.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 23 12:26:50 addons-785680 dockerd[1164]: time="2024-09-23T12:26:50.245534697Z" level=info msg="ignoring event" container=f1a2d7d8638f95212f3d5cedb272dc2e0b890f87c678d22e92edd49b0e0cd04d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:52 addons-785680 dockerd[1164]: time="2024-09-23T12:26:52.273486651Z" level=info msg="ignoring event" container=c4946852279bcd9841f32a886e642191b403b205532ef93a214e0eb53dc38b53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:26:52 addons-785680 dockerd[1164]: time="2024-09-23T12:26:52.822413302Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=c54542a8d9b0ef00 traceID=0f6ed20a7a4bbe9031a3748c6f50bc51
	Sep 23 12:26:52 addons-785680 dockerd[1164]: time="2024-09-23T12:26:52.825519445Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=c54542a8d9b0ef00 traceID=0f6ed20a7a4bbe9031a3748c6f50bc51
	Sep 23 12:27:10 addons-785680 dockerd[1164]: time="2024-09-23T12:27:10.769361573Z" level=info msg="ignoring event" container=0247a67f1190bc2f5750060e45e93541ecfbd40ffb0d9e8e34f2f8a95d04c9aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:27:11 addons-785680 dockerd[1164]: time="2024-09-23T12:27:11.804983248Z" level=info msg="ignoring event" container=5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:27:11 addons-785680 dockerd[1164]: time="2024-09-23T12:27:11.980513652Z" level=info msg="ignoring event" container=ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:27:12 addons-785680 dockerd[1164]: time="2024-09-23T12:27:12.111924620Z" level=info msg="ignoring event" container=ef123b4a400c26126ab2e4dddb190e4bbdf8bad4722841e9a3b6c56face64ef1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:27:12 addons-785680 dockerd[1164]: time="2024-09-23T12:27:12.345835373Z" level=info msg="ignoring event" container=6d9e69527ba981acf7d00aa1e09afe31bc52c92cef755a876774978ef19349dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a3b6c1230f320       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   27895cbd2942b       gadget-x9mkh
	b589c06eb725a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 10 minutes ago      Running             gcp-auth                                 0                   fe05e47e7aff3       gcp-auth-89d5ffd79-qcmtd
	e9a3d8e792f41       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   926f9a30000b3       csi-hostpathplugin-lwkth
	ce86c63a181a4       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   926f9a30000b3       csi-hostpathplugin-lwkth
	7f350b2e64f99       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             10 minutes ago      Running             controller                               0                   741083f932013       ingress-nginx-controller-bc57996ff-v9xbq
	b29ffa5090d76       ce263a8653f9c                                                                                                                                10 minutes ago      Exited              patch                                    2                   2e15e69f199d5       ingress-nginx-admission-patch-kz6vg
	fb295bebbbc67       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   926f9a30000b3       csi-hostpathplugin-lwkth
	8a59cc1272dfe       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   926f9a30000b3       csi-hostpathplugin-lwkth
	9db907b820d54       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   926f9a30000b3       csi-hostpathplugin-lwkth
	ecd5f51f0b25a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   5c38144f19bca       csi-hostpath-resizer-0
	e70720718ed01       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   926f9a30000b3       csi-hostpathplugin-lwkth
	b4929d0e2f758       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   a849be7b8b3bc       csi-hostpath-attacher-0
	3a160dc9ca213       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   10 minutes ago      Exited              create                                   0                   5cd93863ff63d       ingress-nginx-admission-create-8cgff
	423fe690c69ea       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   a126e6b57325c       snapshot-controller-56fcc65765-gf4sq
	691fdc47a86c7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   2fc7f266ceecd       snapshot-controller-56fcc65765-6s2d8
	81d7f285e6da5       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   0df17a4b29ef3       local-path-provisioner-86d989889c-28gm2
	677cca65d9e54       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   a38f9ed64a23d       metrics-server-84c5f94fbc-2gg67
	4bcbed57dabf5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             11 minutes ago      Running             minikube-ingress-dns                     0                   b2ada1fbc1fe7       kube-ingress-dns-minikube
	f4b12edf15432       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   62858afcab08f       cloud-spanner-emulator-5b584cc74-qpvdx
	22bdb62ca5b0d       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   734eed03dab6d       storage-provisioner
	165b4fb7526de       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   276ece9cd7764       coredns-7c65d6cfc9-h6nr5
	b646cf5a9569f       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   a348434128b80       kube-proxy-bk2ss
	d6807570fbb83       175ffd71cce3d                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   86122f1437b93       kube-controller-manager-addons-785680
	3a088b7be318b       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   8933f7cda93ec       etcd-addons-785680
	728307e75b01c       6bab7719df100                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   1d0b25a3e8ad6       kube-apiserver-addons-785680
	8cd9f80aa4b2b       9aa1fad941575                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   441275d24c568       kube-scheduler-addons-785680
	
	
	==> controller_ingress [7f350b2e64f9] <==
	W0923 12:16:42.086186       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0923 12:16:42.086527       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0923 12:16:42.094768       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0923 12:16:42.343983       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0923 12:16:42.555819       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0923 12:16:42.646185       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0923 12:16:42.695215       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ebffd95e-6f5b-4fef-b459-00d8899f8954", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0923 12:16:42.705460       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"fa98ddd8-b663-4f07-8985-523efee36472", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0923 12:16:42.705548       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"10df6e12-487b-4933-a323-4756263bf6ab", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 12:16:43.976357       7 nginx.go:317] "Starting NGINX process"
	I0923 12:16:43.977778       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 12:16:43.984785       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 12:16:43.985092       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 12:16:44.069588       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 12:16:44.069985       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-v9xbq"
	I0923 12:16:44.098573       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-v9xbq" node="addons-785680"
	I0923 12:16:44.282106       7 controller.go:213] "Backend successfully reloaded"
	I0923 12:16:44.282197       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 12:16:44.283450       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-v9xbq", UID:"858f9cb6-75f0-439b-964d-ff32e905b6b2", APIVersion:"v1", ResourceVersion:"799", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [165b4fb7526d] <==
	[INFO] 10.244.0.8:51912 - 48946 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069838s
	[INFO] 10.244.0.8:46416 - 9786 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086936s
	[INFO] 10.244.0.8:46416 - 830 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101825s
	[INFO] 10.244.0.8:39697 - 24725 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000082619s
	[INFO] 10.244.0.8:39697 - 45968 "A IN registry.kube-system.svc.cluster.local.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000097732s
	[INFO] 10.244.0.8:43783 - 19237 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000104342s
	[INFO] 10.244.0.8:43783 - 19233 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000272844s
	[INFO] 10.244.0.8:60541 - 292 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000079199s
	[INFO] 10.244.0.8:60541 - 9439 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000132569s
	[INFO] 10.244.0.8:50126 - 30231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079925s
	[INFO] 10.244.0.8:50126 - 48914 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00028447s
	[INFO] 10.244.0.25:34173 - 26209 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452557s
	[INFO] 10.244.0.25:43529 - 12479 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194119s
	[INFO] 10.244.0.25:41573 - 45477 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000332739s
	[INFO] 10.244.0.25:42933 - 36861 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198116s
	[INFO] 10.244.0.25:41846 - 19538 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183255s
	[INFO] 10.244.0.25:44399 - 841 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174036s
	[INFO] 10.244.0.25:44375 - 15933 "A IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.0036906s
	[INFO] 10.244.0.25:52330 - 43346 "AAAA IN storage.googleapis.com.us-east1-b.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.006985009s
	[INFO] 10.244.0.25:58905 - 673 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.003886401s
	[INFO] 10.244.0.25:50987 - 24338 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.005179306s
	[INFO] 10.244.0.25:35159 - 12483 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003657589s
	[INFO] 10.244.0.25:36103 - 35572 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004198298s
	[INFO] 10.244.0.25:33993 - 59945 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002780697s
	[INFO] 10.244.0.25:60804 - 54410 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005663297s
	
	
	==> describe nodes <==
	Name:               addons-785680
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-785680
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-785680
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_14_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-785680
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-785680"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:14:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-785680
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:27:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:27:03 +0000   Mon, 23 Sep 2024 12:14:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:27:03 +0000   Mon, 23 Sep 2024 12:14:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:27:03 +0000   Mon, 23 Sep 2024 12:14:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:27:03 +0000   Mon, 23 Sep 2024 12:14:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-785680
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c704dea062d84a95ac74f158a0df75e9
	  System UUID:                e44b76cd-166c-4143-86df-d7b7bba73063
	  Boot ID:                    9ddb39eb-7fb7-4050-b5ec-3f6d0c394efa
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  default                     cloud-spanner-emulator-5b584cc74-qpvdx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-x9mkh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-qcmtd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-v9xbq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-h6nr5                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-lwkth                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-785680                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-785680                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-785680       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bk2ss                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-785680                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-2gg67             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-6s2d8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-gf4sq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-28gm2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-785680 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-785680 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node addons-785680 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-785680 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-785680 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-785680 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node addons-785680 event: Registered Node addons-785680 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 33 70 80 ad 94 08 06
	[  +1.603310] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 6b 87 04 cd ab 08 06
	[  +2.528600] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 5a c6 a7 83 17 08 06
	[  +0.054436] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e e1 3f 2b 6b b0 08 06
	[  +8.329710] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 76 40 96 46 a5 76 08 06
	[  +0.736923] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 4a eb 39 d4 e8 d2 08 06
	[  +0.154245] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 8a 84 e3 a0 aa 08 06
	[  +0.573595] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 c7 c9 c1 e6 bb 08 06
	[  +8.810700] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 c4 e6 70 af 13 08 06
	[  +1.304007] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 98 c1 97 bb f6 08 06
	[  +0.449455] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e 2d 70 1c a9 2e 08 06
	[Sep23 12:17] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4a ea a8 dc 9e 13 08 06
	[  +0.000768] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 b3 5b 1c dc ba 08 06
	
	
	==> etcd [3a088b7be318] <==
	{"level":"info","ts":"2024-09-23T12:16:41.742925Z","caller":"traceutil/trace.go:171","msg":"trace[1725219289] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1286; }","duration":"140.390722ms","start":"2024-09-23T12:16:41.602523Z","end":"2024-09-23T12:16:41.742913Z","steps":["trace[1725219289] 'agreement among raft nodes before linearized reading'  (duration: 139.968762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:16:41.743123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.097299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:16:41.743223Z","caller":"traceutil/trace.go:171","msg":"trace[939059801] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1286; }","duration":"150.199926ms","start":"2024-09-23T12:16:41.593013Z","end":"2024-09-23T12:16:41.743213Z","steps":["trace[939059801] 'agreement among raft nodes before linearized reading'  (duration: 150.079261ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:16:41.744094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.660899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-23T12:16:41.744525Z","caller":"traceutil/trace.go:171","msg":"trace[728021778] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1286; }","duration":"300.796965ms","start":"2024-09-23T12:16:41.443421Z","end":"2024-09-23T12:16:41.744218Z","steps":["trace[728021778] 'agreement among raft nodes before linearized reading'  (duration: 300.586017ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:16:41.744760Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:16:41.443376Z","time spent":"301.368855ms","remote":"127.0.0.1:57366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-23T12:16:41.745516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.32024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:16:41.745813Z","caller":"traceutil/trace.go:171","msg":"trace[1209096276] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1286; }","duration":"316.610735ms","start":"2024-09-23T12:16:41.429182Z","end":"2024-09-23T12:16:41.745793Z","steps":["trace[1209096276] 'agreement among raft nodes before linearized reading'  (duration: 316.303055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:16:41.746028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:16:41.429130Z","time spent":"316.882187ms","remote":"127.0.0.1:57388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-23T12:16:41.746515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.315126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-kz6vg\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-09-23T12:16:41.757239Z","caller":"traceutil/trace.go:171","msg":"trace[1206738176] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-kz6vg; range_end:; response_count:1; response_revision:1286; }","duration":"364.02343ms","start":"2024-09-23T12:16:41.393187Z","end":"2024-09-23T12:16:41.757210Z","steps":["trace[1206738176] 'agreement among raft nodes before linearized reading'  (duration: 353.034447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:16:41.757367Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:16:41.393138Z","time spent":"364.165662ms","remote":"127.0.0.1:57388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":4585,"request content":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-kz6vg\" "}
	{"level":"warn","ts":"2024-09-23T12:16:46.553234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.367019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:16:46.553453Z","caller":"traceutil/trace.go:171","msg":"trace[1857316840] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1338; }","duration":"122.600825ms","start":"2024-09-23T12:16:46.430827Z","end":"2024-09-23T12:16:46.553428Z","steps":["trace[1857316840] 'range keys from in-memory index tree'  (duration: 122.280426ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:17:34.632092Z","caller":"traceutil/trace.go:171","msg":"trace[218999839] transaction","detail":"{read_only:false; response_revision:1501; number_of_response:1; }","duration":"127.493131ms","start":"2024-09-23T12:17:34.504577Z","end":"2024-09-23T12:17:34.632070Z","steps":["trace[218999839] 'process raft request'  (duration: 126.86364ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:24:52.984032Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1870}
	{"level":"info","ts":"2024-09-23T12:24:53.117784Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1870,"took":"132.710291ms","hash":498334252,"current-db-size-bytes":9154560,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":5025792,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-23T12:24:53.118032Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":498334252,"revision":1870,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-23T12:26:06.108987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.733319ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:26:06.111107Z","caller":"traceutil/trace.go:171","msg":"trace[752036349] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2416; }","duration":"304.894264ms","start":"2024-09-23T12:26:05.806183Z","end":"2024-09-23T12:26:06.111077Z","steps":["trace[752036349] 'range keys from in-memory index tree'  (duration: 302.717201ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:26:06.377520Z","caller":"traceutil/trace.go:171","msg":"trace[40172295] linearizableReadLoop","detail":"{readStateIndex:2576; appliedIndex:2575; }","duration":"111.527346ms","start":"2024-09-23T12:26:06.265960Z","end":"2024-09-23T12:26:06.377488Z","steps":["trace[40172295] 'read index received'  (duration: 111.338909ms)","trace[40172295] 'applied index is now lower than readState.Index'  (duration: 187.881µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:26:06.378176Z","caller":"traceutil/trace.go:171","msg":"trace[879492504] transaction","detail":"{read_only:false; response_revision:2417; number_of_response:1; }","duration":"149.264112ms","start":"2024-09-23T12:26:06.228894Z","end":"2024-09-23T12:26:06.378158Z","steps":["trace[879492504] 'process raft request'  (duration: 148.482239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:26:06.378728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.736644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T12:26:06.379525Z","caller":"traceutil/trace.go:171","msg":"trace[1088269648] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2417; }","duration":"113.551767ms","start":"2024-09-23T12:26:06.265948Z","end":"2024-09-23T12:26:06.379500Z","steps":["trace[1088269648] 'agreement among raft nodes before linearized reading'  (duration: 112.720748ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:26:06.651134Z","caller":"traceutil/trace.go:171","msg":"trace[2118638989] transaction","detail":"{read_only:false; response_revision:2418; number_of_response:1; }","duration":"259.498788ms","start":"2024-09-23T12:26:06.391612Z","end":"2024-09-23T12:26:06.651111Z","steps":["trace[2118638989] 'process raft request'  (duration: 258.450845ms)"],"step_count":1}
	
	
	==> gcp-auth [b589c06eb725] <==
	2024/09/23 12:17:05 GCP Auth Webhook started!
	2024/09/23 12:17:25 Ready to marshal response ...
	2024/09/23 12:17:25 Ready to write response ...
	2024/09/23 12:17:26 Ready to marshal response ...
	2024/09/23 12:17:26 Ready to write response ...
	2024/09/23 12:17:53 Ready to marshal response ...
	2024/09/23 12:17:53 Ready to write response ...
	2024/09/23 12:17:53 Ready to marshal response ...
	2024/09/23 12:17:53 Ready to write response ...
	2024/09/23 12:17:53 Ready to marshal response ...
	2024/09/23 12:17:53 Ready to write response ...
	2024/09/23 12:26:00 Ready to marshal response ...
	2024/09/23 12:26:00 Ready to write response ...
	2024/09/23 12:26:00 Ready to marshal response ...
	2024/09/23 12:26:00 Ready to write response ...
	2024/09/23 12:26:00 Ready to marshal response ...
	2024/09/23 12:26:00 Ready to write response ...
	2024/09/23 12:26:10 Ready to marshal response ...
	2024/09/23 12:26:10 Ready to write response ...
	2024/09/23 12:26:39 Ready to marshal response ...
	2024/09/23 12:26:39 Ready to write response ...
	2024/09/23 12:26:39 Ready to marshal response ...
	2024/09/23 12:26:39 Ready to write response ...
	2024/09/23 12:26:48 Ready to marshal response ...
	2024/09/23 12:26:48 Ready to write response ...
	
	
	==> kernel <==
	 12:27:14 up  6:36,  0 users,  load average: 0.75, 1.04, 1.68
	Linux addons-785680 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [728307e75b01] <==
	W0923 12:17:00.098119       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.96.51:443: connect: connection refused
	E0923 12:17:00.098177       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.96.51:443: connect: connection refused" logger="UnhandledError"
	I0923 12:17:25.239296       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 12:17:25.279282       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 12:17:43.724373       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 12:17:43.900305       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0923 12:17:44.534634       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 12:17:44.666753       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 12:17:44.753482       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	W0923 12:17:45.045122       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0923 12:17:45.188870       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 12:17:45.397483       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 12:17:45.441134       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 12:17:45.488299       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 12:17:46.187616       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 12:17:46.189203       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 12:17:46.191153       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 12:17:46.288627       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 12:17:46.489113       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 12:17:46.974119       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 12:26:00.478523       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.117.35"}
	E0923 12:26:50.010094       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 12:26:50.027040       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 12:26:50.045643       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 12:27:05.042661       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [d6807570fbb8] <==
	E0923 12:26:09.510201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:26:14.254034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="11.166µs"
	W0923 12:26:18.363761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:18.363858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:26:18.924837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:18.924892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:26:24.508438       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0923 12:26:26.418338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="6.66µs"
	I0923 12:26:32.767784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-785680"
	W0923 12:26:35.403830       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:35.403894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:26:36.644249       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0923 12:26:49.743825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="6.106µs"
	W0923 12:26:50.754746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:50.754808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:26:58.973369       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:58.973424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:26:59.064900       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:26:59.064956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:27:03.202340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-785680"
	W0923 12:27:04.484910       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:27:04.485041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:27:06.600858       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:27:06.600920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:27:11.680178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.857µs"
	
	
	==> kube-proxy [b646cf5a9569] <==
	I0923 12:15:13.466554       1 server_linux.go:66] "Using iptables proxy"
	I0923 12:15:14.710498       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 12:15:14.715901       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:15:15.751591       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 12:15:15.752876       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:15:15.970188       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:15:16.060102       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:15:16.096113       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:15:16.173508       1 config.go:199] "Starting service config controller"
	I0923 12:15:16.178285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:15:16.194692       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:15:16.375794       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:15:16.375165       1 config.go:328] "Starting node config controller"
	I0923 12:15:16.375816       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:15:16.581507       1 shared_informer.go:320] Caches are synced for node config
	I0923 12:15:16.581560       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:15:16.581623       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8cd9f80aa4b2] <==
	W0923 12:14:56.200627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:14:56.208054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:56.201060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 12:14:56.208144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.035481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:14:57.035879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.053751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:14:57.054079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.151362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:14:57.151757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.191933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:14:57.192391       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.246357       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:14:57.246848       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 12:14:57.253529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:14:57.253807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.259846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 12:14:57.260169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.269861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 12:14:57.269908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.312989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:14:57.313557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:57.327626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 12:14:57.327988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 12:14:59.686482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:27:00 addons-785680 kubelet[2178]: I0923 12:27:00.773208    2178 scope.go:117] "RemoveContainer" containerID="a3b6c1230f3205e60ad4db3f8eebb5ea03bf8e437f77634ed395deccfdd3479c"
	Sep 23 12:27:00 addons-785680 kubelet[2178]: E0923 12:27:00.773476    2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-x9mkh_gadget(d976f4c4-fc85-4e7e-8314-7f634729c0a4)\"" pod="gadget/gadget-x9mkh" podUID="d976f4c4-fc85-4e7e-8314-7f634729c0a4"
	Sep 23 12:27:03 addons-785680 kubelet[2178]: E0923 12:27:03.775391    2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="ad71e78e-1160-429e-b243-a217b8d43e6d"
	Sep 23 12:27:08 addons-785680 kubelet[2178]: E0923 12:27:08.776770    2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3c610ae0-c61d-4d68-ad17-7fd4ea8df8b3"
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.063094    2178 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ad71e78e-1160-429e-b243-a217b8d43e6d-gcp-creds\") pod \"ad71e78e-1160-429e-b243-a217b8d43e6d\" (UID: \"ad71e78e-1160-429e-b243-a217b8d43e6d\") "
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.063382    2178 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mknd8\" (UniqueName: \"kubernetes.io/projected/ad71e78e-1160-429e-b243-a217b8d43e6d-kube-api-access-mknd8\") pod \"ad71e78e-1160-429e-b243-a217b8d43e6d\" (UID: \"ad71e78e-1160-429e-b243-a217b8d43e6d\") "
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.063461    2178 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71e78e-1160-429e-b243-a217b8d43e6d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ad71e78e-1160-429e-b243-a217b8d43e6d" (UID: "ad71e78e-1160-429e-b243-a217b8d43e6d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.063660    2178 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ad71e78e-1160-429e-b243-a217b8d43e6d-gcp-creds\") on node \"addons-785680\" DevicePath \"\""
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.075798    2178 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad71e78e-1160-429e-b243-a217b8d43e6d-kube-api-access-mknd8" (OuterVolumeSpecName: "kube-api-access-mknd8") pod "ad71e78e-1160-429e-b243-a217b8d43e6d" (UID: "ad71e78e-1160-429e-b243-a217b8d43e6d"). InnerVolumeSpecName "kube-api-access-mknd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:27:11 addons-785680 kubelet[2178]: I0923 12:27:11.164760    2178 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mknd8\" (UniqueName: \"kubernetes.io/projected/ad71e78e-1160-429e-b243-a217b8d43e6d-kube-api-access-mknd8\") on node \"addons-785680\" DevicePath \"\""
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.282193    2178 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkslz\" (UniqueName: \"kubernetes.io/projected/a4004cb2-7560-45d6-957e-58b28943f86e-kube-api-access-lkslz\") pod \"a4004cb2-7560-45d6-957e-58b28943f86e\" (UID: \"a4004cb2-7560-45d6-957e-58b28943f86e\") "
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.288707    2178 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4004cb2-7560-45d6-957e-58b28943f86e-kube-api-access-lkslz" (OuterVolumeSpecName: "kube-api-access-lkslz") pod "a4004cb2-7560-45d6-957e-58b28943f86e" (UID: "a4004cb2-7560-45d6-957e-58b28943f86e"). InnerVolumeSpecName "kube-api-access-lkslz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.383030    2178 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lkslz\" (UniqueName: \"kubernetes.io/projected/a4004cb2-7560-45d6-957e-58b28943f86e-kube-api-access-lkslz\") on node \"addons-785680\" DevicePath \"\""
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.584431    2178 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwr66\" (UniqueName: \"kubernetes.io/projected/54e0a75a-3fc8-4445-922b-6a5f4489144e-kube-api-access-pwr66\") pod \"54e0a75a-3fc8-4445-922b-6a5f4489144e\" (UID: \"54e0a75a-3fc8-4445-922b-6a5f4489144e\") "
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.590589    2178 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e0a75a-3fc8-4445-922b-6a5f4489144e-kube-api-access-pwr66" (OuterVolumeSpecName: "kube-api-access-pwr66") pod "54e0a75a-3fc8-4445-922b-6a5f4489144e" (UID: "54e0a75a-3fc8-4445-922b-6a5f4489144e"). InnerVolumeSpecName "kube-api-access-pwr66". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.685561    2178 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pwr66\" (UniqueName: \"kubernetes.io/projected/54e0a75a-3fc8-4445-922b-6a5f4489144e-kube-api-access-pwr66\") on node \"addons-785680\" DevicePath \"\""
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.701292    2178 scope.go:117] "RemoveContainer" containerID="ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.737272    2178 scope.go:117] "RemoveContainer" containerID="ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: E0923 12:27:12.741364    2178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8" containerID="ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.741427    2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8"} err="failed to get container status \"ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8\": rpc error: code = Unknown desc = Error response from daemon: No such container: ac3f5f2f943a9a0db16cf6a9898a1eac8bccc1051472eb5ed145485f4a93e6c8"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.741461    2178 scope.go:117] "RemoveContainer" containerID="5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.778259    2178 scope.go:117] "RemoveContainer" containerID="5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: E0923 12:27:12.779469    2178 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7" containerID="5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.779518    2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7"} err="failed to get container status \"5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5031366ff0bf9428499c9a746e6c537a23ee1e63932008858a9ff249214555f7"
	Sep 23 12:27:12 addons-785680 kubelet[2178]: I0923 12:27:12.800882    2178 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad71e78e-1160-429e-b243-a217b8d43e6d" path="/var/lib/kubelet/pods/ad71e78e-1160-429e-b243-a217b8d43e6d/volumes"
	
	
	==> storage-provisioner [22bdb62ca5b0] <==
	I0923 12:15:20.858938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:15:21.165795       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:15:21.165936       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:15:21.416501       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:15:21.434160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-785680_ae9845c1-63da-417c-8308-00528b6ff791!
	I0923 12:15:21.517273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0654f147-4113-4100-99d9-bbf81bbe3ba8", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-785680_ae9845c1-63da-417c-8308-00528b6ff791 became leader
	I0923 12:15:22.234116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-785680_ae9845c1-63da-417c-8308-00528b6ff791!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-785680 -n addons-785680
helpers_test.go:261: (dbg) Run:  kubectl --context addons-785680 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-8cgff ingress-nginx-admission-patch-kz6vg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-785680 describe pod busybox ingress-nginx-admission-create-8cgff ingress-nginx-admission-patch-kz6vg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-785680 describe pod busybox ingress-nginx-admission-create-8cgff ingress-nginx-admission-patch-kz6vg: exit status 1 (117.157465ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-785680/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 12:17:53 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f5prd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f5prd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m22s                   default-scheduler  Successfully assigned default/busybox to addons-785680
	  Warning  Failed     8m3s (x6 over 9m20s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m50s (x4 over 9m21s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m21s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m21s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m15s (x22 over 9m20s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8cgff" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kz6vg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-785680 describe pod busybox ingress-nginx-admission-create-8cgff ingress-nginx-admission-patch-kz6vg: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.05s)

Diagnosis sketch (not part of the harness output): the registry and registry-proxy pods reported healthy, but the in-cluster probe never completed because kubelet could not pull gcr.io/k8s-minikube/busybox — the events above show "unauthorized: authentication failed" from gcr.io. A minimal manual reproduction, assuming the addons-785680 profile is still running; the pod name "registry-probe" is invented for this sketch:

	# Re-run the in-cluster probe by hand (mirrors addons_test.go:343):
	kubectl --context addons-785680 run registry-probe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox --rm -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# If it stalls in ImagePullBackOff, inspect the pull events from another terminal:
	kubectl --context addons-785680 describe pod registry-probe
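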

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
2024/09/23 12:34:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0923 12:34:50.738389  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Non-zero exit: kubectl --context functional-096250 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: context deadline exceeded (1.456µs)
functional_test_tunnel_test.go:245: nginx-svc svc.status.loadBalancer.ingress never got an IP: context deadline exceeded
functional_test_tunnel_test.go:246: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc
functional_test_tunnel_test.go:250: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.106.234.215   <pending>     80:32695/TCP   3m9s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.13s)

Diagnosis sketch (not part of the harness output): the service stayed at <pending> for its EXTERNAL-IP the whole 3m window; for a minikube LoadBalancer service this usually means no tunnel process was publishing routes at the time. A minimal manual check, assuming the functional-096250 profile is still up (the tunnel runs in a separate terminal and may prompt for sudo):

	# Keep this running; while it is up, LoadBalancer services should get an ingress IP.
	minikube -p functional-096250 tunnel
	# Then re-check the service from another terminal:
	kubectl --context functional-096250 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
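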

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdany-port1325373913/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727094804340555442" to /tmp/TestFunctionalparallelMountCmdany-port1325373913/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727094804340555442" to /tmp/TestFunctionalparallelMountCmdany-port1325373913/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727094804340555442" to /tmp/TestFunctionalparallelMountCmdany-port1325373913/001/test-1727094804340555442
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (693.559976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:25.034638  257293 retry.go:31] will retry after 523.80084ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (555.825492ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:26.114686  257293 retry.go:31] will retry after 774.72906ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.373598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:27.280135  257293 retry.go:31] will retry after 1.255088023s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
E0923 12:33:28.816345  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.251124ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:28.909948  257293 retry.go:31] will retry after 2.099886293s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (437.95107ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:31.448517  257293 retry.go:31] will retry after 3.067738536s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (472.737631ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:125: /mount-9p did not appear within 10.648486486s: exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (385.943325ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-096250 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "sudo umount -f /mount-9p": exit status 1 (408.818357ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-linux-amd64 -p functional-096250 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdany-port1325373913/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdany-port1325373913/001:/mount-9p --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdany-port1325373913/001:/mount-9p --alsologtostderr -v=1] stderr:
I0923 12:33:24.482406  294374 out.go:345] Setting OutFile to fd 1 ...
I0923 12:33:24.482698  294374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:24.482714  294374 out.go:358] Setting ErrFile to fd 2...
I0923 12:33:24.482723  294374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:24.483006  294374 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:33:24.483438  294374 mustload.go:65] Loading cluster: functional-096250
I0923 12:33:24.484157  294374 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:33:24.485091  294374 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:33:24.523766  294374 host.go:66] Checking if "functional-096250" exists ...
I0923 12:33:24.524467  294374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 12:33:24.762119  294374 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:33:24.696007689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 12:33:24.762723  294374 cli_runner.go:164] Run: docker network inspect functional-096250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:33:24.835368  294374 out.go:201] 
W0923 12:33:24.837046  294374 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0923 12:33:24.838448  294374 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (11.56s)

Diagnosis sketch (not part of the harness output): the mount daemon exited immediately with HOST_UNSUPPORTED ("The host does not support filesystem 9p"), so every findmnt retry against /mount-9p was bound to fail; the specific-port variant below fails for the same reason. A quick host-side check, assuming a Linux host with root access, since minikube mount needs 9p available on the kernel the driver shares:

	# Expect a "9p" entry when support is built into the running kernel:
	grep 9p /proc/filesystems
	# Otherwise, try loading the modules (no effect on kernels built without them):
	sudo modprobe -a 9p 9pnet
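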

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (13.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdspecific-port2686498712/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (668.893748ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:36.566549  257293 retry.go:31] will retry after 290.418267ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (384.363615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:37.242642  257293 retry.go:31] will retry after 554.518585ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.757737ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:38.185281  257293 retry.go:31] will retry after 1.415107701s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (445.108957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:40.046720  257293 retry.go:31] will retry after 1.151141772s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.327703ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:41.599628  257293 retry.go:31] will retry after 1.432438031s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (1.039190786s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:44.072441  257293 retry.go:31] will retry after 2.323952851s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (1.084631122s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 11.584843024s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (1.079616448s)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-096250 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "sudo umount -f /mount-9p": exit status 1 (518.644514ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-096250 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdspecific-port2686498712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdspecific-port2686498712/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdspecific-port2686498712/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0923 12:33:36.067012  294970 out.go:345] Setting OutFile to fd 1 ...
I0923 12:33:36.067462  294970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:36.067506  294970 out.go:358] Setting ErrFile to fd 2...
I0923 12:33:36.067527  294970 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:36.067887  294970 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:33:36.068514  294970 mustload.go:65] Loading cluster: functional-096250
I0923 12:33:36.069579  294970 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:33:36.070729  294970 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:33:36.127065  294970 host.go:66] Checking if "functional-096250" exists ...
I0923 12:33:36.127821  294970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 12:33:36.362175  294970 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:55 SystemTime:2024-09-23 12:33:36.307184804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 12:33:36.363528  294970 cli_runner.go:164] Run: docker network inspect functional-096250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:33:36.409412  294970 out.go:201] 
W0923 12:33:36.411126  294970 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0923 12:33:36.412627  294970 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (13.30s)
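Root cause for the MountCmd failures: each mount daemon exits with HOST_UNSUPPORTED because the Cloud Shell kernel does not provide the 9p filesystem that minikube mount serves over, so /mount-9p never appears no matter how long findmnt retries. A quick manual check on the host (illustrative commands, not captured in this run):

    grep 9p /proc/filesystems || sudo modprobe 9p
    grep 9p /proc/filesystems

If the second grep still prints nothing, the kernel was built without 9p support and every minikube mount invocation on this host will fail the same way.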

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (14.70s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (1.618820563s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:50.824697  257293 retry.go:31] will retry after 503.351636ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (394.346076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:51.722841  257293 retry.go:31] will retry after 931.525742ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (394.898526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:53.049643  257293 retry.go:31] will retry after 707.265751ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (379.321481ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:54.136896  257293 retry.go:31] will retry after 2.196340289s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (380.04559ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:56.714192  257293 retry.go:31] will retry after 2.373293851s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (391.935726ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 12:33:59.480558  257293 retry.go:31] will retry after 3.694664712s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "findmnt -T" /mount1: exit status 1 (396.408495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:342: mount was not ready in time: exit status 1
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount1 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount1 --alsologtostderr -v=1] stderr:
I0923 12:33:49.969284  295626 out.go:345] Setting OutFile to fd 1 ...
I0923 12:33:49.985229  295626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.985268  295626 out.go:358] Setting ErrFile to fd 2...
I0923 12:33:49.985292  295626 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.985688  295626 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:33:49.986364  295626 mustload.go:65] Loading cluster: functional-096250
I0923 12:33:49.993234  295626 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:33:49.997490  295626 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:33:50.207880  295626 host.go:66] Checking if "functional-096250" exists ...
I0923 12:33:50.208582  295626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 12:33:50.647730  295626 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:33:50.537456594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 12:33:50.648012  295626 cli_runner.go:164] Run: docker network inspect functional-096250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:33:50.766937  295626 out.go:201] 
W0923 12:33:50.768484  295626 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0923 12:33:50.770299  295626 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount2 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount2 --alsologtostderr -v=1] stderr:
I0923 12:33:49.972863  295627 out.go:345] Setting OutFile to fd 1 ...
I0923 12:33:49.973184  295627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.973220  295627 out.go:358] Setting ErrFile to fd 2...
I0923 12:33:49.973243  295627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.973655  295627 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:33:49.974195  295627 mustload.go:65] Loading cluster: functional-096250
I0923 12:33:49.974928  295627 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:33:49.975832  295627 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:33:50.110488  295627 host.go:66] Checking if "functional-096250" exists ...
I0923 12:33:50.111033  295627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 12:33:50.654499  295627 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:33:50.537456594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 12:33:50.654782  295627 cli_runner.go:164] Run: docker network inspect functional-096250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:33:50.739863  295627 out.go:201] 
W0923 12:33:50.741580  295627 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0923 12:33:50.743076  295627 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount3 --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-096250 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764340693/001:/mount3 --alsologtostderr -v=1] stderr:
I0923 12:33:49.981405  295628 out.go:345] Setting OutFile to fd 1 ...
I0923 12:33:49.981848  295628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.981912  295628 out.go:358] Setting ErrFile to fd 2...
I0923 12:33:49.982021  295628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:33:49.982399  295628 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:33:49.982930  295628 mustload.go:65] Loading cluster: functional-096250
I0923 12:33:49.983785  295628 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:33:49.984690  295628 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:33:50.198686  295628 host.go:66] Checking if "functional-096250" exists ...
I0923 12:33:50.199254  295628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0923 12:33:50.657072  295628 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:33:50.537456594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I0923 12:33:50.657414  295628 cli_runner.go:164] Run: docker network inspect functional-096250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:33:50.763199  295628 out.go:201] 
W0923 12:33:50.766805  295628 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0923 12:33:50.768683  295628 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (14.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.70s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0923 12:35:49.779581  257293 retry.go:31] will retry after 4.231068557s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:35:54.011449  257293 retry.go:31] will retry after 3.527734854s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:35:57.539471  257293 retry.go:31] will retry after 6.058296935s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:36:03.598057  257293 retry.go:31] will retry after 12.111834676s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:36:15.711457  257293 retry.go:31] will retry after 22.750831897s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:36:38.463268  257293 retry.go:31] will retry after 25.093447321s: Temporary Error: Get "http:": http: no Host in request URL
I0923 12:37:03.557560  257293 retry.go:31] will retry after 36.814691019s: Temporary Error: Get "http:": http: no Host in request URL
E0923 12:37:06.873413  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:37:34.580533  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-096250 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.106.234.215   <pending>     80:32695/TCP   5m
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.70s)
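Root cause: nginx-svc never received an EXTERNAL-IP (it stayed <pending>), so the test assembled a URL with an empty host and Go's HTTP client rejected it with "no Host in request URL". On the docker driver a LoadBalancer service only gets an external IP while minikube tunnel is running; the tunnel launched in the earlier serial steps evidently never propagated one. A manual reproduction (illustrative commands, not captured in this run):

    out/minikube-linux-amd64 -p functional-096250 tunnel &
    kubectl --context functional-096250 get svc nginx-svc -w

Once an EXTERNAL-IP appears, curling it should return the "Welcome to nginx!" page the assertion expects. The cert_rotation errors above reference the already-deleted addons-785680 profile and appear to be unrelated noise from a stale kubeconfig entry.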

                                                
                                    

Test pass (96/107)

Order  Passed test  Duration (s)
3 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.12
4 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.12
5 TestAddons/Setup 198.99
7 TestAddons/serial/Volcano 46.79
9 TestAddons/serial/GCPAuth/Namespaces 0.22
12 TestAddons/parallel/Ingress 24.08
13 TestAddons/parallel/InspektorGadget 12.19
14 TestAddons/parallel/MetricsServer 6.99
16 TestAddons/parallel/CSI 65.99
17 TestAddons/parallel/Headlamp 20.3
18 TestAddons/parallel/CloudSpanner 5.69
19 TestAddons/parallel/LocalPath 54.23
20 TestAddons/parallel/NvidiaDevicePlugin 6.64
21 TestAddons/parallel/Yakd 12.14
22 TestAddons/StoppedEnableDisable 11.77
25 TestFunctional/serial/CopySyncFile 0.12
26 TestFunctional/serial/StartWithProxy 75.74
27 TestFunctional/serial/AuditLog 0
28 TestFunctional/serial/SoftStart 36.56
29 TestFunctional/serial/KubeContext 0.09
30 TestFunctional/serial/KubectlGetPods 0.12
33 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
34 TestFunctional/serial/CacheCmd/cache/add_local 1.37
35 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.13
36 TestFunctional/serial/CacheCmd/cache/list 0.09
37 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
38 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
39 TestFunctional/serial/CacheCmd/cache/delete 0.18
40 TestFunctional/serial/MinikubeKubectlCmd 1.19
41 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.19
42 TestFunctional/serial/ExtraConfig 54.82
43 TestFunctional/serial/ComponentHealth 0.12
44 TestFunctional/serial/LogsCmd 1.62
45 TestFunctional/serial/LogsFileCmd 1.54
46 TestFunctional/serial/InvalidService 4.56
48 TestFunctional/parallel/ConfigCmd 0.84
49 TestFunctional/parallel/DashboardCmd 16.87
50 TestFunctional/parallel/DryRun 0.74
51 TestFunctional/parallel/InternationalLanguage 0.38
52 TestFunctional/parallel/StatusCmd 1.44
56 TestFunctional/parallel/ServiceCmdConnect 8.98
57 TestFunctional/parallel/AddonsCmd 0.24
58 TestFunctional/parallel/PersistentVolumeClaim 29.04
60 TestFunctional/parallel/SSHCmd 0.78
61 TestFunctional/parallel/CpCmd 3.14
62 TestFunctional/parallel/MySQL 39.83
63 TestFunctional/parallel/FileSync 0.39
64 TestFunctional/parallel/CertSync 3.49
68 TestFunctional/parallel/NodeLabels 0.1
70 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
72 TestFunctional/parallel/License 0.38
73 TestFunctional/parallel/Version/short 0.09
74 TestFunctional/parallel/Version/components 1.46
75 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
76 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
77 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
78 TestFunctional/parallel/ImageCommands/ImageListYaml 0.38
79 TestFunctional/parallel/ImageCommands/ImageBuild 3.23
80 TestFunctional/parallel/ImageCommands/Setup 2.81
81 TestFunctional/parallel/DockerEnv/bash 1.77
82 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.07
83 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
84 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
85 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
86 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.49
87 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.47
88 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
89 TestFunctional/parallel/ImageCommands/ImageRemove 1.14
90 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.34
91 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
92 TestFunctional/parallel/ServiceCmd/DeployApp 26.66
93 TestFunctional/parallel/ServiceCmd/List 0.59
94 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
95 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
96 TestFunctional/parallel/ServiceCmd/Format 0.75
97 TestFunctional/parallel/ServiceCmd/URL 0.63
99 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.83
100 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
102 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.56
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
105 TestFunctional/parallel/ProfileCmd/profile_list 0.64
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.68
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
115 TestFunctional/delete_echo-server_images 0.06
116 TestFunctional/delete_my-image_image 0.03
117 TestFunctional/delete_minikube_cached_images 0.03
122 TestStartStop/group/cloud-shell/serial/FirstStart 78.64
123 TestStartStop/group/cloud-shell/serial/DeployApp 9.39
124 TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive 1.26
125 TestStartStop/group/cloud-shell/serial/Stop 11.16
126 TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop 0.32
127 TestStartStop/group/cloud-shell/serial/SecondStart 284.99
128 TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop 6.01
129 TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop 5.13
130 TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages 0.31
131 TestStartStop/group/cloud-shell/serial/Pause 4.53
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-785680
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-785680: exit status 85 (120.495529ms)

                                                
                                                
-- stdout --
	* Profile "addons-785680" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-785680"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-785680
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-785680: exit status 85 (122.052208ms)

                                                
                                                
-- stdout --
	* Profile "addons-785680" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-785680"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

                                                
                                    
TestAddons/Setup (198.99s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-785680 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-785680 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m18.990076096s)
--- PASS: TestAddons/Setup (198.99s)
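Given the number of --addons flags on that start command, a quick way to confirm they all reached the enabled state (illustrative follow-up, not part of the recorded run) is:

    out/minikube-linux-amd64 -p addons-785680 addons list

which prints each addon with its enabled/disabled status for the profile.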

                                                
                                    
TestAddons/serial/Volcano (46.79s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 249.144591ms
addons_test.go:851: volcano-controller stabilized in 249.411174ms
addons_test.go:843: volcano-admission stabilized in 249.463366ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-gxt4r" [a78b8754-fd7c-49f8-b0ca-ffd48819c826] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004391995s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-l5ftb" [e0f65c31-b6bf-4fd3-879a-1f5a61973f89] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006199145s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-drbjg" [7ee08456-17a0-4a0b-88c9-f68f1fe4e829] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.00539882s
addons_test.go:870: (dbg) Run:  kubectl --context addons-785680 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-785680 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-785680 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4312240b-b04d-437c-a0f8-90d6c98cfb53] Pending
helpers_test.go:344: "test-job-nginx-0" [4312240b-b04d-437c-a0f8-90d6c98cfb53] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4312240b-b04d-437c-a0f8-90d6c98cfb53] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.00558873s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable volcano --alsologtostderr -v=1: (10.948076348s)
--- PASS: TestAddons/serial/Volcano (46.79s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-785680 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-785680 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
TestAddons/parallel/Ingress (24.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-785680 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-785680 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-785680 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a15eed13-e53c-4b7c-a0bf-bf8fa3268f37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a15eed13-e53c-4b7c-a0bf-bf8fa3268f37] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.037127904s
I0923 12:27:56.154114  257293 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-785680 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:284: (dbg) Done: kubectl --context addons-785680 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.223201653s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable ingress-dns --alsologtostderr -v=1: (1.703526097s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable ingress --alsologtostderr -v=1: (8.19491543s)
--- PASS: TestAddons/parallel/Ingress (24.08s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.19s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x9mkh" [d976f4c4-fc85-4e7e-8314-7f634729c0a4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004887643s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-785680
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-785680: (6.177765304s)
--- PASS: TestAddons/parallel/InspektorGadget (12.19s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.99s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 7.276113ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2gg67" [60fbd9ca-e159-4eb2-b768-53e1e220cc1f] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004710932s
addons_test.go:413: (dbg) Run:  kubectl --context addons-785680 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)

                                                
                                    
TestAddons/parallel/CSI (65.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0923 12:27:28.144330  257293 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 12:27:28.150643  257293 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 12:27:28.150675  257293 kapi.go:107] duration metric: took 93.443937ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 93.473961ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-785680 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-785680 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2151cee5-db5d-4b23-95fa-9277fd8beb5f] Pending
helpers_test.go:344: "task-pv-pod" [2151cee5-db5d-4b23-95fa-9277fd8beb5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2151cee5-db5d-4b23-95fa-9277fd8beb5f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.006476315s
addons_test.go:528: (dbg) Run:  kubectl --context addons-785680 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-785680 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-785680 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-785680 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-785680 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-785680 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [28e646da-403d-4e91-876f-f74ec46c908f] Pending
helpers_test.go:344: "task-pv-pod-restore" [28e646da-403d-4e91-876f-f74ec46c908f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [28e646da-403d-4e91-876f-f74ec46c908f] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004870989s
addons_test.go:570: (dbg) Run:  kubectl --context addons-785680 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-785680 delete pod task-pv-pod-restore: (1.244589632s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-785680 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-785680 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.202621922s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable volumesnapshots --alsologtostderr -v=1: (1.194364768s)
--- PASS: TestAddons/parallel/CSI (65.99s)
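The long runs of identical kubectl get pvc ... -o jsonpath={.status.phase} lines above are the test helper polling until each claim binds. A single-command equivalent (illustrative; requires kubectl 1.23+ for jsonpath conditions, and is not what the harness actually runs):

    kubectl --context addons-785680 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m

which blocks until the PVC reports phase Bound or the timeout expires.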

                                                
                                    
TestAddons/parallel/Headlamp (20.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-785680 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-785680 --alsologtostderr -v=1: (1.251283955s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-f2hjj" [f9c91bae-31b9-46a2-ac04-afe11b2de7c6] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-f2hjj" [f9c91bae-31b9-46a2-ac04-afe11b2de7c6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-f2hjj" [f9c91bae-31b9-46a2-ac04-afe11b2de7c6] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005199674s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable headlamp --alsologtostderr -v=1: (6.045657563s)
--- PASS: TestAddons/parallel/Headlamp (20.30s)
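
The same enable/wait/disable cycle sketched as standalone commands; the label selector and namespace come from the log above, and kubectl wait substitutes for the test's poll:

	out/minikube-linux-amd64 addons enable headlamp -p addons-785680
	kubectl --context addons-785680 -n headlamp wait --for=condition=Ready pod -l app.kubernetes.io/name=headlamp --timeout=8m
	out/minikube-linux-amd64 -p addons-785680 addons disable headlamp --alsologtostderr -v=1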

TestAddons/parallel/CloudSpanner (5.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-qpvdx" [5cd3cf00-5bab-492e-9e95-d17b4f8ef8e2] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004415874s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-785680
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

TestAddons/parallel/LocalPath (54.23s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-785680 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-785680 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d7be718-52be-482e-a023-430b0e747632] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d7be718-52be-482e-a023-430b0e747632] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d7be718-52be-482e-a023-430b0e747632] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004160589s
addons_test.go:938: (dbg) Run:  kubectl --context addons-785680 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 ssh "cat /opt/local-path-provisioner/pvc-d6f18bb2-3816-44db-9014-d267a08bbb45_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-785680 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-785680 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.588471399s)
--- PASS: TestAddons/parallel/LocalPath (54.23s)
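
A by-hand sketch of the local-path flow above; the directory under /opt/local-path-provisioner is named after whatever volume UID the provisioner assigns, so <pvc-dir> below is a placeholder:

	kubectl --context addons-785680 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-785680 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# After the busybox pod completes, read the written file back from the node.
	out/minikube-linux-amd64 -p addons-785680 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"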

TestAddons/parallel/NvidiaDevicePlugin (6.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2st59" [cba3dfd0-b8a8-46d8-9a28-88a2f37b0d2d] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00883054s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-785680
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (12.14s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bp2dc" [5959835c-c443-4a81-abd1-3a570b1251b3] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005573227s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-785680 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-785680 addons disable yakd --alsologtostderr -v=1: (6.132090592s)
--- PASS: TestAddons/parallel/Yakd (12.14s)

TestAddons/StoppedEnableDisable (11.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-785680
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-785680: (11.267490984s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-785680
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-785680
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-785680
--- PASS: TestAddons/StoppedEnableDisable (11.77s)
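
The point of this test is that addon toggles succeed against a stopped profile; a minimal replay using only the commands shown above:

	out/minikube-linux-amd64 stop -p addons-785680
	# Neither toggle should need (or trigger) a running cluster.
	out/minikube-linux-amd64 addons enable dashboard -p addons-785680
	out/minikube-linux-amd64 addons disable dashboard -p addons-785680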

TestFunctional/serial/CopySyncFile (0.12s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/files/etc/test/nested/copy/257293/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.12s)

TestFunctional/serial/StartWithProxy (75.74s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-096250 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m15.737027239s)
--- PASS: TestFunctional/serial/StartWithProxy (75.74s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.56s)

=== RUN   TestFunctional/serial/SoftStart
I0923 12:30:06.320599  257293 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-096250 --alsologtostderr -v=8: (36.553200009s)
functional_test.go:663: soft start took 36.558725791s for "functional-096250" cluster.
I0923 12:30:42.874284  257293 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.56s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-096250 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)
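
Each cache add pulls the image and loads it into the cluster's container runtime; a sketch of the same sequence, with the crictl check borrowed from verify_cache_inside_node below:

	out/minikube-linux-amd64 -p functional-096250 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-096250 cache add registry.k8s.io/pause:latest
	# Confirm the images are visible from inside the node.
	out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl images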

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-096250 /tmp/TestFunctionalserialCacheCmdcacheadd_local3467790745/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache add minikube-local-cache-test:functional-096250
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache delete minikube-local-cache-test:functional-096250
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-096250
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.13s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (424.220714ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
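
The reload cycle above as standalone commands: remove the image inside the node, confirm it is gone, restore it from the cache, and confirm it is back:

	out/minikube-linux-amd64 -p functional-096250 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: exit 1
	out/minikube-linux-amd64 -p functional-096250 cache reload
	out/minikube-linux-amd64 -p functional-096250 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected: exit 0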

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 kubectl -- --context functional-096250 get pods
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 kubectl -- --context functional-096250 get pods: (1.185180664s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-096250 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.19s)

TestFunctional/serial/ExtraConfig (54.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-096250 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.814859111s)
functional_test.go:761: restart took 54.815009715s for "functional-096250" cluster.
I0923 12:31:46.151977  257293 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (54.82s)
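
The restart under test as a single command; --wait=all blocks until every verified component reports healthy, which is why the restart takes roughly 55s here:

	out/minikube-linux-amd64 start -p functional-096250 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all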

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-096250 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.62s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 logs: (1.617970499s)
--- PASS: TestFunctional/serial/LogsCmd (1.62s)

TestFunctional/serial/LogsFileCmd (1.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 logs --file /tmp/TestFunctionalserialLogsFileCmd2170054841/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 logs --file /tmp/TestFunctionalserialLogsFileCmd2170054841/001/logs.txt: (1.538130865s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

TestFunctional/serial/InvalidService (4.56s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-096250 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-096250
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-096250: exit status 115 (726.314965ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31379 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_5b55102efd84289233ffc613c137836b410b4e4d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-096250 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.56s)
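
A by-hand replay of the failure path: with no pod backing the service, minikube service should fail with SVC_UNREACHABLE and exit status 115 (the echo of $? is an addition for illustration):

	kubectl --context functional-096250 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-096250; echo "exit=$?"
	kubectl --context functional-096250 delete -f testdata/invalidsvc.yaml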

TestFunctional/parallel/ConfigCmd (0.84s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 config get cpus: exit status 14 (129.411564ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 config get cpus: exit status 14 (139.846943ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.84s)
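
The set/get/unset round trip in isolation; as the log shows, a get after unset fails with exit status 14 and "specified key could not be found in config":

	out/minikube-linux-amd64 -p functional-096250 config set cpus 2
	out/minikube-linux-amd64 -p functional-096250 config get cpus      # should print 2
	out/minikube-linux-amd64 -p functional-096250 config unset cpus
	out/minikube-linux-amd64 -p functional-096250 config get cpus      # expected: exit 14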

TestFunctional/parallel/DashboardCmd (16.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096250 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096250 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 296543: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.87s)

TestFunctional/parallel/DryRun (0.74s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (297.065059ms)

-- stdout --
	* [functional-096250] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0923 12:34:03.975973  296242 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:34:03.976263  296242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:34:03.976282  296242 out.go:358] Setting ErrFile to fd 2...
	I0923 12:34:03.976293  296242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:34:03.976617  296242 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
	I0923 12:34:03.977287  296242 out.go:352] Setting JSON to false
	I0923 12:34:03.978405  296242 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":24221,"bootTime":1727070623,"procs":95,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0923 12:34:03.978545  296242 start.go:139] virtualization:  guest
	I0923 12:34:03.982273  296242 out.go:177] * [functional-096250] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	I0923 12:34:03.985082  296242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:34:03.985222  296242 notify.go:220] Checking for updates...
	I0923 12:34:03.990675  296242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:34:03.994390  296242 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	I0923 12:34:03.997103  296242 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube
	I0923 12:34:04.000713  296242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:34:04.003739  296242 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0923 12:34:04.007897  296242 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:34:04.009378  296242 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:34:04.056160  296242 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0923 12:34:04.056357  296242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:34:04.168804  296242 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:34:04.150324269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:34:04.168974  296242 docker.go:318] overlay module found
	I0923 12:34:04.172284  296242 out.go:177] * Using the docker driver based on existing profile
	I0923 12:34:04.174891  296242 start.go:297] selected driver: docker
	I0923 12:34:04.174918  296242 start.go:901] validating driver "docker" against &{Name:functional-096250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-096250 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:34:04.175099  296242 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:34:04.179682  296242 out.go:201] 
	W0923 12:34:04.182828  296242 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 12:34:04.185948  296242 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.74s)
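
The dry-run check as one command: --dry-run validates flags against the saved profile without touching the cluster, and a 250MB request is below the 1800MB floor, so the expected exit status is 23 (RSRC_INSUFFICIENT_REQ_MEMORY); the echo of $? is illustrative:

	out/minikube-linux-amd64 start -p functional-096250 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=docker; echo "exit=$?"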

TestFunctional/parallel/InternationalLanguage (0.38s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096250 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (378.031434ms)

-- stdout --
	* [functional-096250] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0923 12:34:04.815663  296361 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:34:04.816023  296361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:34:04.816075  296361 out.go:358] Setting ErrFile to fd 2...
	I0923 12:34:04.816106  296361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:34:04.816903  296361 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
	I0923 12:34:04.817868  296361 out.go:352] Setting JSON to false
	I0923 12:34:04.819381  296361 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":24222,"bootTime":1727070623,"procs":96,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0923 12:34:04.819547  296361 start.go:139] virtualization:  guest
	I0923 12:34:04.823723  296361 out.go:177] * [functional-096250] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	I0923 12:34:04.827365  296361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:34:04.827699  296361 notify.go:220] Checking for updates...
	I0923 12:34:04.832135  296361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:34:04.835183  296361 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19690-251237/kubeconfig
	I0923 12:34:04.840214  296361 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19690-251237/.minikube
	I0923 12:34:04.843655  296361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:34:04.846268  296361 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0923 12:34:04.849981  296361 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:34:04.851095  296361 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:34:04.898121  296361 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0923 12:34:04.898279  296361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:34:04.993464  296361 info.go:266] docker info: {ID:8c091e5d-c8d2-4ae9-9a43-fbe0c7b936d8 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:54 SystemTime:2024-09-23 12:34:04.976825008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:34:04.993642  296361 docker.go:318] overlay module found
	I0923 12:34:04.997291  296361 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 12:34:04.999502  296361 start.go:297] selected driver: docker
	I0923 12:34:04.999526  296361 start.go:901] validating driver "docker" against &{Name:functional-096250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-096250 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:34:04.999674  296361 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:34:05.002660  296361 out.go:201] 
	W0923 12:34:05.005047  296361 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 12:34:05.007648  296361 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.38s)

TestFunctional/parallel/StatusCmd (1.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)

TestFunctional/parallel/ServiceCmdConnect (8.98s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-096250 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-096250 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fnwf2" [02efc16e-2a59-46ca-949a-6edcee1952e3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fnwf2" [02efc16e-2a59-46ca-949a-6edcee1952e3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005271351s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32767
functional_test.go:1675: http://192.168.49.2:32767: success! body:

Hostname: hello-node-connect-67bdd5bbb4-fnwf2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32767
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.98s)
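
The same round trip sketched as commands; curl and the URL variable are stand-ins for the test's Go HTTP client:

	kubectl --context functional-096250 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-096250 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-096250 service hello-node-connect --url)
	curl "$URL"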

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (29.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [414d6907-82fb-4940-b6ea-01728fb70ce6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006903676s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-096250 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-096250 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-096250 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-096250 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ff7f589-5b8b-49b7-9232-b0ea44d32342] Pending
helpers_test.go:344: "sp-pod" [3ff7f589-5b8b-49b7-9232-b0ea44d32342] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ff7f589-5b8b-49b7-9232-b0ea44d32342] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005465954s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-096250 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-096250 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-096250 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [82135107-ad5d-4897-98bd-1332967f85f4] Pending
helpers_test.go:344: "sp-pod" [82135107-ad5d-4897-98bd-1332967f85f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [82135107-ad5d-4897-98bd-1332967f85f4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005510938s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-096250 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.04s)
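
The persistence check distilled: write through the claim, delete the pod, recreate it, and the file must still be there because it lives on the PVC rather than in the pod:

	kubectl --context functional-096250 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-096250 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-096250 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-096250 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-096250 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-096250 exec sp-pod -- ls /tmp/mount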

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (3.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh -n functional-096250 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cp functional-096250:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd192170451/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh -n functional-096250 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh -n functional-096250 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.14s)
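
Both copy directions in brief; each copy is verified by catting the file at its destination (the /tmp/cp-test.txt host path here is illustrative):

	out/minikube-linux-amd64 -p functional-096250 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-096250 ssh -n functional-096250 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-096250 cp functional-096250:/home/docker/cp-test.txt /tmp/cp-test.txt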

TestFunctional/parallel/MySQL (39.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-096250 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-w5c4l" [c5b156d5-a61f-4bac-a475-db66c3f7cf50] Pending
helpers_test.go:344: "mysql-6cdb49bbb-w5c4l" [c5b156d5-a61f-4bac-a475-db66c3f7cf50] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-w5c4l" [c5b156d5-a61f-4bac-a475-db66c3f7cf50] Running
E0923 12:32:27.371387  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.006237672s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;": exit status 1 (224.692239ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 12:32:31.442962  257293 retry.go:31] will retry after 1.443352382s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;": exit status 1 (342.151004ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 12:32:33.228835  257293 retry.go:31] will retry after 1.246326925s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;": exit status 1 (267.74051ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 12:32:34.743495  257293 retry.go:31] will retry after 2.415097482s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;": exit status 1 (400.627003ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 12:32:37.560091  257293 retry.go:31] will retry after 4.755223375s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-096250 exec mysql-6cdb49bbb-w5c4l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.83s)
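
The increasing delays above ("will retry after 1.44s / 1.25s / 2.42s / 4.76s") come from the test's retry helper rerunning the mysql exec until the server inside the pod finishes initializing. A minimal sketch of that pattern, assuming kubectl is on PATH; this is an illustration, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryExec reruns the command until it succeeds, roughly doubling the
	// delay and adding jitter between attempts, like the log lines above.
	func retryExec(attempts int, base time.Duration, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			delay := base<<i + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// Mirrors the test's kubectl exec call against the mysql pod.
		_ = retryExec(5, time.Second, "kubectl",
			"--context", "functional-096250", "exec", "mysql-6cdb49bbb-w5c4l",
			"--", "mysql", "-ppassword", "-e", "show databases;")
	}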

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/257293/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /etc/test/nested/copy/257293/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (3.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/257293.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /etc/ssl/certs/257293.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/257293.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /usr/share/ca-certificates/257293.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2572932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /etc/ssl/certs/2572932.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2572932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /usr/share/ca-certificates/2572932.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.49s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-096250 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh "sudo systemctl is-active crio": exit status 1 (581.204496ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
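
For context, systemctl is-active prints the unit state on stdout and signals it through the exit code (3 here means inactive), so the test accepts the non-zero exit plus "inactive" as the expected result for the disabled runtime. A minimal sketch of the same check from Go, assuming a systemd host:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Exit status 3 from "is-active" means the unit is inactive.
		out, err := exec.Command("systemctl", "is-active", "crio").Output()
		state := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("crio is %q (exit %d)\n", state, ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run systemctl:", err)
			return
		}
		fmt.Printf("crio is %q\n", state)
	}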

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 version -o=json --components: (1.454940684s)
--- PASS: TestFunctional/parallel/Version/components (1.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096250 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-096250
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-096250
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096250 image ls --format short --alsologtostderr:
I0923 12:34:24.258525  297195 out.go:345] Setting OutFile to fd 1 ...
I0923 12:34:24.258770  297195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:24.258832  297195 out.go:358] Setting ErrFile to fd 2...
I0923 12:34:24.258861  297195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:24.259113  297195 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:34:24.260003  297195 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:24.260218  297195 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:24.260809  297195 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:34:24.289952  297195 ssh_runner.go:195] Run: systemctl --version
I0923 12:34:24.290130  297195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096250
I0923 12:34:24.319149  297195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32853 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/functional-096250/id_rsa Username:docker}
I0923 12:34:24.421727  297195 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
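
The stderr trace shows how each image command reaches the node: docker container inspect with a Go template resolves the host port published for the container's 22/tcp, and an SSH client then dials 127.0.0.1 on that port (32853 in this run). A standalone sketch of the port lookup, assuming the docker CLI is on PATH; hostSSHPort is a hypothetical helper name, not minikube's API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks Docker which host port is published for 22/tcp,
	// mirroring the cli_runner step in the trace above.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := hostSSHPort("functional-096250")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on host port", port)
	}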

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096250 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-096250 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-096250 | c263c7629d3ff | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| localhost/my-image                          | functional-096250 | 8e81ce91abc82 | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096250 image ls --format table --alsologtostderr:
I0923 12:34:28.480263  297518 out.go:345] Setting OutFile to fd 1 ...
I0923 12:34:28.480449  297518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:28.480465  297518 out.go:358] Setting ErrFile to fd 2...
I0923 12:34:28.480474  297518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:28.480750  297518 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:34:28.481747  297518 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:28.481983  297518 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:28.482619  297518 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:34:28.514085  297518 ssh_runner.go:195] Run: systemctl --version
I0923 12:34:28.514268  297518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096250
I0923 12:34:28.542137  297518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32853 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/functional-096250/id_rsa Username:docker}
I0923 12:34:28.644101  297518 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096250 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-096250"],"size":"4940000"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},
{"id":"c263c7629d3fff937b3e66ac9dd65eb47d720d367fd52fb1f88826c8449ae6a8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-096250"],"size":"30"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},
{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"8e81ce91abc82c4d7e3a250b55f2f23ec3233c420579cb214d6d74c8a3849741","repoDigests":[],"repoTags":["localhost/my-image:functional-096250"],"size":"1240000"},
{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},
{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},
{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},
{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},
{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096250 image ls --format json --alsologtostderr:
I0923 12:34:28.180754  297485 out.go:345] Setting OutFile to fd 1 ...
I0923 12:34:28.181016  297485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:28.181040  297485 out.go:358] Setting ErrFile to fd 2...
I0923 12:34:28.181052  297485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:28.181412  297485 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:34:28.182299  297485 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:28.182516  297485 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:28.183135  297485 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:34:28.211784  297485 ssh_runner.go:195] Run: systemctl --version
I0923 12:34:28.211994  297485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096250
I0923 12:34:28.239634  297485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32853 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/functional-096250/id_rsa Username:docker}
I0923 12:34:28.340760  297485 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096250 image ls --format yaml --alsologtostderr:
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-096250
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c263c7629d3fff937b3e66ac9dd65eb47d720d367fd52fb1f88826c8449ae6a8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-096250
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096250 image ls --format yaml --alsologtostderr:
I0923 12:34:24.548054  297230 out.go:345] Setting OutFile to fd 1 ...
I0923 12:34:24.548379  297230 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:24.548396  297230 out.go:358] Setting ErrFile to fd 2...
I0923 12:34:24.548406  297230 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:24.548774  297230 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:34:24.550006  297230 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:24.550256  297230 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:24.551104  297230 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:34:24.609658  297230 ssh_runner.go:195] Run: systemctl --version
I0923 12:34:24.609897  297230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096250
I0923 12:34:24.671106  297230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32853 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/functional-096250/id_rsa Username:docker}
I0923 12:34:24.801894  297230 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096250 ssh pgrep buildkitd: exit status 1 (400.158224ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image build -t localhost/my-image:functional-096250 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 image build -t localhost/my-image:functional-096250 testdata/build --alsologtostderr: (2.473901585s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096250 image build -t localhost/my-image:functional-096250 testdata/build --alsologtostderr:
I0923 12:34:25.336812  297326 out.go:345] Setting OutFile to fd 1 ...
I0923 12:34:25.338031  297326 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:25.338057  297326 out.go:358] Setting ErrFile to fd 2...
I0923 12:34:25.338070  297326 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:34:25.338419  297326 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/bin
I0923 12:34:25.339606  297326 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:25.367282  297326 config.go:182] Loaded profile config "functional-096250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:34:25.368335  297326 cli_runner.go:164] Run: docker container inspect functional-096250 --format={{.State.Status}}
I0923 12:34:25.396964  297326 ssh_runner.go:195] Run: systemctl --version
I0923 12:34:25.397072  297326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096250
I0923 12:34:25.425912  297326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32853 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19690-251237/.minikube/machines/functional-096250/id_rsa Username:docker}
I0923 12:34:25.525994  297326 build_images.go:161] Building image from path: /tmp/build.2531166779.tar
I0923 12:34:25.526262  297326 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 12:34:25.541841  297326 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2531166779.tar
I0923 12:34:25.547481  297326 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2531166779.tar: stat -c "%s %y" /var/lib/minikube/build/build.2531166779.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2531166779.tar': No such file or directory
I0923 12:34:25.547522  297326 ssh_runner.go:362] scp /tmp/build.2531166779.tar --> /var/lib/minikube/build/build.2531166779.tar (3072 bytes)
I0923 12:34:25.589293  297326 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2531166779
I0923 12:34:25.604986  297326 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2531166779 -xf /var/lib/minikube/build/build.2531166779.tar
I0923 12:34:25.624643  297326 docker.go:360] Building image: /var/lib/minikube/build/build.2531166779
I0923 12:34:25.624828  297326 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-096250 /var/lib/minikube/build/build.2531166779
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8e81ce91abc82c4d7e3a250b55f2f23ec3233c420579cb214d6d74c8a3849741 done
#8 naming to localhost/my-image:functional-096250 done
#8 DONE 0.1s
I0923 12:34:27.682911  297326 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-096250 /var/lib/minikube/build/build.2531166779: (2.057998234s)
I0923 12:34:27.683045  297326 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2531166779
I0923 12:34:27.699229  297326 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2531166779.tar
I0923 12:34:27.714944  297326 build_images.go:217] Built localhost/my-image:functional-096250 from /tmp/build.2531166779.tar
I0923 12:34:27.715059  297326 build_images.go:133] succeeded building to: functional-096250
I0923 12:34:27.715160  297326 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)
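
The build_images.go lines above stage the build in three steps: pack the context into a tar, copy and unpack it under /var/lib/minikube/build on the node, then run docker build against the unpacked directory. A rough local sketch of that sequence, with hypothetical paths and plain exec calls standing in for minikube's ssh_runner:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command, echoes its combined output, and fails fast.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		log.Printf("%s %v\n%s", name, args, out)
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		staging := "/tmp/build.sketch" // stand-in for /var/lib/minikube/build/build.<N>
		// 1. Pack the local context, as build_images.go does before copying it in.
		run("tar", "-cf", staging+".tar", "-C", "testdata/build", ".")
		// 2. Unpack where the builder can see it (scp'd into the node in the log).
		run("mkdir", "-p", staging)
		run("tar", "-C", staging, "-xf", staging+".tar")
		// 3. Build and tag from the unpacked directory, then clean up.
		run("docker", "build", "-t", "localhost/my-image:sketch", staging)
		run("rm", "-rf", staging, staging+".tar")
	}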

TestFunctional/parallel/ImageCommands/Setup (2.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.764171513s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-096250
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.81s)

TestFunctional/parallel/DockerEnv/bash (1.77s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-096250 docker-env) && out/minikube-linux-amd64 status -p functional-096250"
functional_test.go:499: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-096250 docker-env) && out/minikube-linux-amd64 status -p functional-096250": (1.019196977s)
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-096250 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image load --daemon kicbase/echo-server:functional-096250 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 image load --daemon kicbase/echo-server:functional-096250 --alsologtostderr: (1.601097894s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image load --daemon kicbase/echo-server:functional-096250 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-096250 image load --daemon kicbase/echo-server:functional-096250 --alsologtostderr: (1.08298335s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.08043624s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-096250
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image load --daemon kicbase/echo-server:functional-096250 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image save kicbase/echo-server:functional-096250 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image rm kicbase/echo-server:functional-096250 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
E0923 12:32:06.873080  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:06.879535  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:06.890938  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:06.912368  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:06.953776  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:07.035236  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:07.196644  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image ls
E0923 12:32:07.518239  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-096250
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 image save --daemon kicbase/echo-server:functional-096250 --alsologtostderr
E0923 12:32:08.160160  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-096250
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

TestFunctional/parallel/ServiceCmd/DeployApp (26.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-096250 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-096250 expose deployment hello-node --type=NodePort --port=8080
E0923 12:32:09.444561  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-xv272" [6bf8804a-67e2-457d-b291-f2f852b6c7af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0923 12:32:12.006052  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:32:17.128781  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-6b9f76b5c7-xv272" [6bf8804a-67e2-457d-b291-f2f852b6c7af] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.004791133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.66s)
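
The "waiting 10m0s for pods matching ..." helper polls the API server for pods carrying the label until one reports Running. A minimal client-go sketch of the same loop; waitForRunning is a hypothetical helper, with the kubeconfig read from the default location:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunning polls until a pod matching selector is Running or the
	// timeout elapses; a simplified stand-in for the test helper's wait loop.
	func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("%q healthy: %s\n", selector, p.Name)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForRunning(cs, "default", "app=hello-node", 10*time.Minute); err != nil {
			panic(err)
		}
	}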

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service list -o json
functional_test.go:1494: Took "496.259965ms" to run "out/minikube-linux-amd64 -p functional-096250 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32747
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.75s)

TestFunctional/parallel/ServiceCmd/URL (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-096250 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32747
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.63s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 292607: os: process already finished
helpers_test.go:502: unable to terminate pid 292505: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-096250 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cdc1619f-8567-4e64-a4aa-2170d2656a25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cdc1619f-8567-4e64-a4aa-2170d2656a25] Running
E0923 12:32:47.854443  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007665677s
I0923 12:32:49.642861  257293 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.64s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "549.163415ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "89.56664ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.68s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "583.426246ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "90.728881ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.68s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-096250 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.06s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-096250
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-096250
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-096250
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestStartStop/group/cloud-shell/serial/FirstStart (78.64s)
=== RUN   TestStartStop/group/cloud-shell/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-494560 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-494560 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.584870294s)
--- PASS: TestStartStop/group/cloud-shell/serial/FirstStart (78.64s)

TestStartStop/group/cloud-shell/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/cloud-shell/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-494560 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbffd840-317d-47e2-ad8d-d944b97615ad] Pending
helpers_test.go:344: "busybox" [bbffd840-317d-47e2-ad8d-d944b97615ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bbffd840-317d-47e2-ad8d-d944b97615ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: integration-test=busybox healthy within 9.006388788s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-494560 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/cloud-shell/serial/DeployApp (9.39s)

TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-494560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-494560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073386573s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context cloud-shell-494560 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/cloud-shell/serial/Stop (11.16s)
=== RUN   TestStartStop/group/cloud-shell/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p cloud-shell-494560 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p cloud-shell-494560 --alsologtostderr -v=3: (11.156493083s)
--- PASS: TestStartStop/group/cloud-shell/serial/Stop (11.16s)

TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-494560 -n cloud-shell-494560
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-494560 -n cloud-shell-494560: exit status 7 (128.104493ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-494560 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/cloud-shell/serial/SecondStart (284.99s)
=== RUN   TestStartStop/group/cloud-shell/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-494560 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 12:42:03.212017  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.218481  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.230058  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.251619  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.293248  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.374900  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.537241  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:03.859298  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:04.500806  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:05.782850  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:06.873471  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/addons-785680/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:08.344716  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:13.466363  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:23.707715  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:42:44.189203  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:43:25.151607  257293 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19690-251237/.minikube/profiles/functional-096250/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-494560 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m44.448929633s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-494560 -n cloud-shell-494560
--- PASS: TestStartStop/group/cloud-shell/serial/SecondStart (284.99s)

TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k72mn" [6bc72aac-fe0b-4c71-bf50-3510d63f8a8e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006280398s
--- PASS: TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k72mn" [6bc72aac-fe0b-4c71-bf50-3510d63f8a8e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004857224s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context cloud-shell-494560 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p cloud-shell-494560 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/cloud-shell/serial/Pause (4.53s)
=== RUN   TestStartStop/group/cloud-shell/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p cloud-shell-494560 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-494560 -n cloud-shell-494560
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-494560 -n cloud-shell-494560: exit status 2 (489.949043ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-494560 -n cloud-shell-494560
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-494560 -n cloud-shell-494560: exit status 2 (436.343033ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p cloud-shell-494560 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-494560 -n cloud-shell-494560
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-494560 -n cloud-shell-494560
--- PASS: TestStartStop/group/cloud-shell/serial/Pause (4.53s)

Test skip (5/107)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)