Test Report: Docker_Cloud_Shell 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Test fail (6/108)

TestAddons/parallel/Registry (76.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 10.878288ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tgjvk" [e2cd5872-f5e5-4446-9681-3487f553eae7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00684205s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ftsrm" [f49b325f-086e-4d70-93ec-6ecea97709a2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005670944s
addons_test.go:342: (dbg) Run:  kubectl --context addons-353302 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-353302 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-353302 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.147837198s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-353302 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
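
(For triage, the failed probe can be replayed by hand. A minimal sketch, assuming the addons-353302 cluster from this run is still up; the final command is copied verbatim from addons_test.go:347, and the service name "registry" follows from the DNS name registry.kube-system.svc.cluster.local.)

    # Check that the registry Service and its endpoints exist in kube-system
    kubectl --context addons-353302 -n kube-system get svc registry
    kubectl --context addons-353302 -n kube-system get endpoints registry
    # Replay the in-cluster probe that timed out above
    kubectl --context addons-353302 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
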
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 ip
2024/09/15 06:45:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable registry --alsologtostderr -v=1
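
(The DEBUG GET above reaches the registry through the node IP rather than cluster DNS. A hedged host-side equivalent, assuming the cluster is still running and the registry addon has not yet been disabled; in this run the IP resolves to 192.168.49.2.)

    # Probe the registry addon via the node IP, bypassing cluster DNS
    curl -sS -o /dev/null -w '%{http_code}\n' "http://$(out/minikube-linux-amd64 -p addons-353302 ip):5000"
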
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-353302
helpers_test.go:235: (dbg) docker inspect addons-353302:

-- stdout --
	[
	    {
	        "Id": "da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80",
	        "Created": "2024-09-15T06:32:42.017660416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8352,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:32:42.2144477Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80/hostname",
	        "HostsPath": "/var/lib/docker/containers/da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80/hosts",
	        "LogPath": "/var/lib/docker/containers/da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80/da39c05fbf8619efdc0a80e1df586760fbc2c4e1172f620f03b9e3f7d135ac80-json.log",
	        "Name": "/addons-353302",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-353302:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-353302",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8fb2d5b4526c11794866d1cead4555b1464a0f919bf11c5c658b93e17438e1f5-init/diff:/var/lib/docker/overlay2/eaeb8ff8a4289d0b7f083d61682b79338a4ed6429a8670d83e872d329919b31d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fb2d5b4526c11794866d1cead4555b1464a0f919bf11c5c658b93e17438e1f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fb2d5b4526c11794866d1cead4555b1464a0f919bf11c5c658b93e17438e1f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fb2d5b4526c11794866d1cead4555b1464a0f919bf11c5c658b93e17438e1f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-353302",
	                "Source": "/var/lib/docker/volumes/addons-353302/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-353302",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-353302",
	                "name.minikube.sigs.k8s.io": "addons-353302",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fed274b42a5b1f6ffe6b3c2f46007b078c3d4b13563da5895474d6b5c1a1ace",
	            "SandboxKey": "/var/run/docker/netns/7fed274b42a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-353302": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9ce2a58ca1e3e71e0e63ec78ef34b414d0aab7abc6c026f43089d1bd8c739ecc",
	                    "EndpointID": "77d323febabf2c2c8449d7b2d8b1da8308e77d657bddf630155fc584c95d4377",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-353302",
	                        "da39c05fbf86"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
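
(The full inspect dump is verbose; for triage, single fields can be pulled with Go templates instead. A sketch only: the network and port names match this run, and the second template mirrors the form minikube itself logs below for 22/tcp.)

    # Container IP on the "addons-353302" network
    docker inspect addons-353302 --format '{{ (index .NetworkSettings.Networks "addons-353302").IPAddress }}'
    # Host port published for the registry's 5000/tcp
    docker inspect addons-353302 --format '{{ (index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort }}'
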
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-353302 -n addons-353302
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 logs -n 25: (1.661187558s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |    Profile    |         User          | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                                                         | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:31 UTC |                     |
	|         | addons-353302                                                                               |               |                       |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:31 UTC |                     |
	|         | addons-353302                                                                               |               |                       |         |                     |                     |
	| start   | -p addons-353302 --wait=true                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:31 UTC | 15 Sep 24 06:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |               |                       |         |                     |                     |
	|         | --addons=registry                                                                           |               |                       |         |                     |                     |
	|         | --addons=metrics-server                                                                     |               |                       |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |               |                       |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |               |                       |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |               |                       |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |               |                       |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |               |                       |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |               |                       |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |               |                       |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |               |                       |         |                     |                     |
	|         | --driver=docker                                                                             |               |                       |         |                     |                     |
	|         | --container-runtime=docker                                                                  |               |                       |         |                     |                     |
	|         | --addons=ingress                                                                            |               |                       |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |               |                       |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |               |                       |         |                     |                     |
	| addons  | addons-353302 addons disable                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |               |                       |         |                     |                     |
	| addons  | addons-353302 addons disable                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |               |                       |         |                     |                     |
	| addons  | addons-353302 addons                                                                        | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |               |                       |         |                     |                     |
	| addons  | addons-353302 addons                                                                        | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | disable volumesnapshots                                                                     |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |               |                       |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:44 UTC | 15 Sep 24 06:44 UTC |
	|         | -p addons-353302                                                                            |               |                       |         |                     |                     |
	| ssh     | addons-353302 ssh cat                                                                       | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:45 UTC | 15 Sep 24 06:45 UTC |
	|         | /opt/local-path-provisioner/pvc-59f8d0a8-be52-4426-9cd8-003f857fbb40_default_test-pvc/file1 |               |                       |         |                     |                     |
	| addons  | addons-353302 addons disable                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:45 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |               |                       |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |               |                       |         |                     |                     |
	| ip      | addons-353302 ip                                                                            | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:45 UTC | 15 Sep 24 06:45 UTC |
	| addons  | addons-353302 addons disable                                                                | addons-353302 | g528047478195_compute | v1.34.0 | 15 Sep 24 06:45 UTC | 15 Sep 24 06:45 UTC |
	|         | registry --alsologtostderr                                                                  |               |                       |         |                     |                     |
	|         | -v=1                                                                                        |               |                       |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|---------------|-----------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:31:52
	Running on machine: cs-905301410258-default
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:31:52.784912    7868 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:31:52.785212    7868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:31:52.785259    7868 out.go:358] Setting ErrFile to fd 2...
	I0915 06:31:52.785319    7868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:31:52.785700    7868 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
	W0915 06:31:52.785976    7868 root.go:314] Error reading config file at /home/g528047478195_compute/minikube-integration/19644-430/.minikube/config/config.json: open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/config/config.json: no such file or directory
	I0915 06:31:52.786560    7868 out.go:352] Setting JSON to false
	I0915 06:31:52.789114    7868 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":2186,"bootTime":1726379726,"procs":20,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0915 06:31:52.789245    7868 start.go:139] virtualization:  guest
	I0915 06:31:52.794904    7868 out.go:177] * [addons-353302] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	W0915 06:31:52.803643    7868 preload.go:293] Failed to list preload files: open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:31:52.803745    7868 notify.go:220] Checking for updates...
	I0915 06:31:52.806363    7868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:31:52.811522    7868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:31:52.815630    7868 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	I0915 06:31:52.819467    7868 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19644-430/.minikube
	I0915 06:31:52.823086    7868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:31:52.825693    7868 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0915 06:31:52.829197    7868 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:31:52.871097    7868 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0915 06:31:52.871259    7868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:31:52.978777    7868 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:false NGoroutines:59 SystemTime:2024-09-15 06:31:52.962771058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:31:52.978957    7868 docker.go:318] overlay module found
	I0915 06:31:52.988960    7868 out.go:177] * Using the docker driver based on user configuration
	I0915 06:31:52.993895    7868 start.go:297] selected driver: docker
	I0915 06:31:52.993924    7868 start.go:901] validating driver "docker" against <nil>
	I0915 06:31:52.993944    7868 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:31:52.994695    7868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:31:53.087169    7868 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:false NGoroutines:59 SystemTime:2024-09-15 06:31:53.071355609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builti
n name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:31:53.087454    7868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:31:53.088211    7868 start_flags.go:421] setting extra-config: kubelet.cgroups-per-qos=false
	I0915 06:31:53.088248    7868 start_flags.go:421] setting extra-config: kubelet.enforce-node-allocatable=""
	I0915 06:31:53.088364    7868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:31:53.091811    7868 out.go:177] * Using Docker driver with root privileges
	I0915 06:31:53.094966    7868 cni.go:84] Creating CNI manager for ""
	I0915 06:31:53.095088    7868 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:31:53.095117    7868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:31:53.095268    7868 start.go:340] cluster config:
	{Name:addons-353302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-353302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:31:53.098572    7868 out.go:177] * Starting "addons-353302" primary control-plane node in "addons-353302" cluster
	I0915 06:31:53.100842    7868 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:31:53.106786    7868 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:31:53.110163    7868 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:31:53.110258    7868 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:31:53.133960    7868 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:31:53.134422    7868 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:31:53.134582    7868 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:31:53.175748    7868 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:31:53.175798    7868 cache.go:56] Caching tarball of preloaded images
	I0915 06:31:53.176257    7868 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:31:53.179884    7868 out.go:177] * Downloading Kubernetes v1.31.1 preload ...
	I0915 06:31:53.185421    7868 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:31:53.217500    7868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:31:56.705334    7868 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:31:56.705549    7868 preload.go:254] verifying checksum of /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:31:58.342217    7868 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 06:31:58.342759    7868 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/config.json ...
	I0915 06:31:58.342808    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/config.json: {Name:mk11365d4141f2152f854dea0bbd708cdb422ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:04.783674    7868 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:32:04.783734    7868 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:32:29.581540    7868 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:32:29.581587    7868 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:32:29.581651    7868 start.go:360] acquireMachinesLock for addons-353302: {Name:mke6e9a2a3a2847b75fe450c0bce4b034996aae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:32:29.581949    7868 start.go:364] duration metric: took 266.453µs to acquireMachinesLock for "addons-353302"
	I0915 06:32:29.581996    7868 start.go:93] Provisioning new machine with config: &{Name:addons-353302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-353302 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:32:29.582134    7868 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:32:29.586954    7868 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:32:29.587314    7868 start.go:159] libmachine.API.Create for "addons-353302" (driver="docker")
	I0915 06:32:29.587354    7868 client.go:168] LocalClient.Create starting
	I0915 06:32:29.587504    7868 main.go:141] libmachine: Creating CA: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem
	I0915 06:32:29.727611    7868 main.go:141] libmachine: Creating client certificate: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/cert.pem
	I0915 06:32:29.795454    7868 cli_runner.go:164] Run: docker network inspect addons-353302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:32:29.822005    7868 cli_runner.go:211] docker network inspect addons-353302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:32:29.822145    7868 network_create.go:284] running [docker network inspect addons-353302] to gather additional debugging logs...
	I0915 06:32:29.822177    7868 cli_runner.go:164] Run: docker network inspect addons-353302
	W0915 06:32:29.848397    7868 cli_runner.go:211] docker network inspect addons-353302 returned with exit code 1
	I0915 06:32:29.848442    7868 network_create.go:287] error running [docker network inspect addons-353302]: docker network inspect addons-353302: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-353302 not found
	I0915 06:32:29.848464    7868 network_create.go:289] output of [docker network inspect addons-353302]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-353302 not found
	
	** /stderr **
	I0915 06:32:29.848620    7868 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:32:29.874841    7868 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fcfcf0}
	I0915 06:32:29.874899    7868 network_create.go:124] attempt to create docker network addons-353302 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1460 ...
	I0915 06:32:29.874996    7868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1460 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-353302 addons-353302
	I0915 06:32:29.985701    7868 network_create.go:108] docker network addons-353302 192.168.49.0/24 created
	I0915 06:32:29.985753    7868 kic.go:121] calculated static IP "192.168.49.2" for the "addons-353302" container
	I0915 06:32:29.985878    7868 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:32:30.011203    7868 cli_runner.go:164] Run: docker volume create addons-353302 --label name.minikube.sigs.k8s.io=addons-353302 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:32:30.038677    7868 oci.go:103] Successfully created a docker volume addons-353302
	I0915 06:32:30.038807    7868 cli_runner.go:164] Run: docker run --rm --name addons-353302-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-353302 --entrypoint /usr/bin/test -v addons-353302:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:32:34.219166    7868 cli_runner.go:217] Completed: docker run --rm --name addons-353302-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-353302 --entrypoint /usr/bin/test -v addons-353302:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.180299736s)
	I0915 06:32:34.219202    7868 oci.go:107] Successfully prepared a docker volume addons-353302
	I0915 06:32:34.219242    7868 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:32:34.219271    7868 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:32:34.219409    7868 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-353302:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:32:41.883037    7868 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-353302:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (7.663561377s)
	I0915 06:32:41.883081    7868 kic.go:203] duration metric: took 7.663804095s to extract preloaded images to volume ...
	W0915 06:32:41.883253    7868 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0915 06:32:41.883388    7868 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0915 06:32:41.883501    7868 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:32:41.994607    7868 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-353302 --name addons-353302 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-353302 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-353302 --network addons-353302 --ip 192.168.49.2 --volume addons-353302:/var --security-opt apparmor=unconfined --memory=4000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:32:42.439905    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Running}}
	I0915 06:32:42.480551    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:32:42.521753    7868 cli_runner.go:164] Run: docker exec addons-353302 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:32:42.628475    7868 oci.go:144] the created container "addons-353302" has a running status.
	I0915 06:32:42.628517    7868 kic.go:225] Creating ssh key for kic: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa...
	I0915 06:32:43.762550    7868 kic_runner.go:191] docker (temp): /home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:32:43.818540    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:32:43.872984    7868 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:32:43.873015    7868 kic_runner.go:114] Args: [docker exec --privileged addons-353302 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:32:43.980496    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:32:44.018742    7868 machine.go:93] provisionDockerMachine start ...
	I0915 06:32:44.018886    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:44.056952    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:44.057460    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:44.057500    7868 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:32:44.228029    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-353302
	
	I0915 06:32:44.228062    7868 ubuntu.go:169] provisioning hostname "addons-353302"
	I0915 06:32:44.228181    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:44.256021    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:44.256345    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:44.256369    7868 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-353302 && echo "addons-353302" | sudo tee /etc/hostname
	I0915 06:32:44.426596    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-353302
	
	I0915 06:32:44.426749    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:44.455428    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:44.455741    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:44.455775    7868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-353302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-353302/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-353302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:32:44.607729    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:32:44.607871    7868 ubuntu.go:175] set auth options {CertDir:/home/g528047478195_compute/minikube-integration/19644-430/.minikube CaCertPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem CaPrivateKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/server.pem ServerKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/server-key.pem ClientKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube}
	I0915 06:32:44.607985    7868 ubuntu.go:177] setting up certificates
	I0915 06:32:44.608024    7868 provision.go:84] configureAuth start
	I0915 06:32:44.608172    7868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-353302
	I0915 06:32:44.637102    7868 provision.go:143] copyHostCerts
	I0915 06:32:44.637213    7868 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem --> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.pem (1115 bytes)
	I0915 06:32:44.637444    7868 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/cert.pem --> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/cert.pem (1164 bytes)
	I0915 06:32:44.637636    7868 exec_runner.go:151] cp: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/key.pem --> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/key.pem (1675 bytes)
	I0915 06:32:44.637748    7868 provision.go:117] generating server cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/server.pem ca-key=/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem private-key=/home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca-key.pem org=g528047478195_compute.addons-353302 san=[127.0.0.1 192.168.49.2 addons-353302 localhost minikube]
	I0915 06:32:44.834541    7868 provision.go:177] copyRemoteCerts
	I0915 06:32:44.834669    7868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:32:44.834791    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:44.859756    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:32:44.966794    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1115 bytes)
	I0915 06:32:45.004880    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0915 06:32:45.043218    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:32:45.081497    7868 provision.go:87] duration metric: took 473.41836ms to configureAuth
	I0915 06:32:45.081605    7868 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:32:45.081936    7868 config.go:182] Loaded profile config "addons-353302": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:32:45.082088    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:45.107023    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:45.107372    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:45.107396    7868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 06:32:45.258182    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 06:32:45.258212    7868 ubuntu.go:71] root file system type: overlay
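The filesystem probe above is a plain df run over SSH; the same check can be reproduced by hand inside the node (sketch):

	# Root-filesystem probe used by the provisioner; prints "overlay" in a kic node.
	df --output=fstype / | tail -n 1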
	I0915 06:32:45.258429    7868 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 06:32:45.258546    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:45.286478    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:45.286794    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:45.286931    7868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 06:32:45.460849    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 06:32:45.461009    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:45.490685    7868 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:45.490994    7868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:32:45.491031    7868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 06:32:46.607644    7868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-15 06:32:45.457996276 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0915 06:32:46.607699    7868 machine.go:96] duration metric: took 2.58892746s to provisionDockerMachine
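The SSH command at 06:32:45.491 is an update-if-changed idiom: diff -u exits non-zero when the freshly rendered unit differs from the installed one, and only then does the || branch move the new file into place, reload systemd, and re-enable/restart docker, which is why the unified diff is echoed above. A generic sketch of the same pattern (the unit name and paths are placeholders, not minikube internals):

	# Update-if-changed: replace and restart only when the rendered unit differs.
	NEW=/tmp/example.service.new CUR=/etc/systemd/system/example.service
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload && sudo systemctl restart example.service
	}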
	I0915 06:32:46.607718    7868 client.go:171] duration metric: took 17.020349901s to LocalClient.Create
	I0915 06:32:46.607747    7868 start.go:167] duration metric: took 17.020436258s to libmachine.API.Create "addons-353302"
	I0915 06:32:46.607760    7868 start.go:293] postStartSetup for "addons-353302" (driver="docker")
	I0915 06:32:46.607778    7868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:32:46.607904    7868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:32:46.607992    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:46.636045    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:32:46.745505    7868 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:32:46.750867    7868 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:32:46.750920    7868 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:32:46.750938    7868 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:32:46.750951    7868 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:32:46.750973    7868 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19644-430/.minikube/addons for local assets ...
	I0915 06:32:46.751068    7868 filesync.go:126] Scanning /home/g528047478195_compute/minikube-integration/19644-430/.minikube/files for local assets ...
	I0915 06:32:46.751120    7868 start.go:296] duration metric: took 143.349778ms for postStartSetup
	I0915 06:32:46.751686    7868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-353302
	I0915 06:32:46.776879    7868 profile.go:143] Saving config to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/config.json ...
	I0915 06:32:46.777497    7868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:32:46.777587    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:46.804628    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:32:46.907003    7868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:32:46.914061    7868 start.go:128] duration metric: took 17.331901435s to createHost
	I0915 06:32:46.914314    7868 start.go:83] releasing machines lock for "addons-353302", held for 17.332297113s
	I0915 06:32:46.914506    7868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-353302
	I0915 06:32:46.942596    7868 ssh_runner.go:195] Run: cat /version.json
	I0915 06:32:46.942700    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:46.942840    7868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:32:46.942963    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:32:46.980988    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:32:46.983408    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:32:47.095476    7868 ssh_runner.go:195] Run: systemctl --version
	I0915 06:32:47.212543    7868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:32:47.219775    7868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0915 06:32:47.260666    7868 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:32:47.260861    7868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:32:47.304708    7868 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
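The two find commands above do CNI housekeeping: the first patches any loopback config under /etc/cni/net.d (injecting a "name" field if missing and pinning cniVersion to 1.0.0 so current CNI plugins accept it), and the second renames bridge/podman configs to *.mk_disabled so they cannot conflict with the CNI minikube installs later. After the patch, a loopback config looks roughly like this (illustrative content, not read back from this run):

	cat <<'EOF'
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}
	EOF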
	I0915 06:32:47.304759    7868 start.go:495] detecting cgroup driver to use...
	I0915 06:32:47.304803    7868 detect.go:190] detected "systemd" cgroup driver on host os
	I0915 06:32:47.305058    7868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:32:47.331209    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 06:32:47.347067    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 06:32:47.362670    7868 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0915 06:32:47.362895    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0915 06:32:47.378719    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:32:47.394169    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 06:32:47.409251    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:32:47.424760    7868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:32:47.439213    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 06:32:47.455243    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 06:32:47.470838    7868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 06:32:47.487008    7868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:32:47.501302    7868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:32:47.515178    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:47.647375    7868 ssh_runner.go:195] Run: sudo systemctl restart containerd
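The run of sed edits at 06:32:47.331 through 47.487 rewrites /etc/containerd/config.toml in place before the restart above: it pins the sandbox image to registry.k8s.io/pause:3.10, migrates runtime names to io.containerd.runc.v2, sets SystemdCgroup = true to match the "systemd" cgroup driver detected on the host, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. The resulting CRI fragment is roughly the following (illustrative, shown here as a heredoc; the actual file is not echoed into the log):

	cat <<'EOF'
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true
	EOF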
	I0915 06:32:47.766497    7868 start.go:495] detecting cgroup driver to use...
	I0915 06:32:47.766555    7868 detect.go:190] detected "systemd" cgroup driver on host os
	I0915 06:32:47.766647    7868 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 06:32:47.796802    7868 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0915 06:32:47.796912    7868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 06:32:47.827769    7868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:32:47.870086    7868 ssh_runner.go:195] Run: which cri-dockerd
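Note that /etc/crictl.yaml is written twice: at 06:32:47.305 it pointed crictl at containerd's socket, and at 06:32:47.827, once Docker is confirmed as the runtime and cri-dockerd is found on the PATH, it is rewritten to unix:///var/run/cri-dockerd.sock. With that file in place the active endpoint can be verified (sketch):

	cat /etc/crictl.yaml
	sudo crictl version    # reports RuntimeName: docker once cri-dockerd is up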
	I0915 06:32:47.878319    7868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 06:32:47.898680    7868 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 06:32:47.937274    7868 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 06:32:48.179540    7868 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 06:32:48.386472    7868 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0915 06:32:48.386648    7868 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0915 06:32:48.415238    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:48.598934    7868 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 06:32:49.110945    7868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 06:32:49.129558    7868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:32:49.149380    7868 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 06:32:49.290206    7868 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 06:32:49.423622    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:49.561410    7868 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 06:32:49.588196    7868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:32:49.606248    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:49.740147    7868 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 06:32:49.850489    7868 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 06:32:49.850930    7868 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 06:32:49.859448    7868 start.go:563] Will wait 60s for crictl version
	I0915 06:32:49.859577    7868 ssh_runner.go:195] Run: which crictl
	I0915 06:32:49.867177    7868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:32:49.920693    7868 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0915 06:32:49.920811    7868 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:32:49.966550    7868 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:32:50.010621    7868 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 06:32:50.010781    7868 cli_runner.go:164] Run: docker network inspect addons-353302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:32:50.035766    7868 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:32:50.041478    7868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
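The /etc/hosts edit uses a replace-then-copy pattern: grep -v first strips any stale host.minikube.internal entry, the fresh mapping is appended, and the temp file is copied back with sudo cp (a plain > redirect into /etc/hosts would fail, because the redirect is opened by the unprivileged shell rather than by sudo). The same pattern written out, with HOSTLINE as a placeholder:

	HOSTLINE=$'192.168.49.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$HOSTLINE"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts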
	I0915 06:32:50.063943    7868 out.go:177]   - kubelet.cgroups-per-qos=false
	I0915 06:32:50.067615    7868 out.go:177]   - kubelet.enforce-node-allocatable=""
	I0915 06:32:50.073274    7868 kubeadm.go:883] updating cluster {Name:addons-353302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-353302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:32:50.073546    7868 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:32:50.073682    7868 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:32:50.112452    7868 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:32:50.112481    7868 docker.go:615] Images already preloaded, skipping extraction
	I0915 06:32:50.112604    7868 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:32:50.142547    7868 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:32:50.142588    7868 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:32:50.142604    7868 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0915 06:32:50.142768    7868 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable="" --hostname-override=addons-353302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-353302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
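The rendered kubelet drop-in uses the same reset trick as the docker unit earlier: the bare ExecStart= clears the command inherited from the base unit before the minikube-specific one is set. The two ExtraOptions from the cluster config surface here as the --cgroups-per-qos=false and --enforce-node-allocatable="" flags, matching the two "out" lines at 06:32:50.063 and 06:32:50.067. Once the node is running, the effective command line can be read back (sketch):

	systemctl cat kubelet    # shows the drop-in with the ExecStart reset
	pgrep -a kubelet         # shows the flags the kubelet was actually started with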
	I0915 06:32:50.142911    7868 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 06:32:50.216790    7868 cni.go:84] Creating CNI manager for ""
	I0915 06:32:50.216833    7868 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:32:50.216853    7868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:32:50.216883    7868 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-353302 NodeName:addons-353302 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:32:50.217143    7868 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-353302"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:32:50.217258    7868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:32:50.231811    7868 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:32:50.231954    7868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:32:50.246345    7868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
	I0915 06:32:50.275327    7868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:32:50.303788    7868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
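The 2154-byte kubeadm.yaml.new just copied is the four-document manifest printed above: InitConfiguration and ClusterConfiguration on kubeadm.k8s.io/v1beta3, plus a KubeletConfiguration and a KubeProxyConfiguration. kubeadm v1.31 still accepts v1beta3 but deprecates it, which is exactly the common.go:101 warning emitted when init runs below; the upgrade path the warning suggests is:

	kubeadm config migrate --old-config old.yaml --new-config new.yaml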
	I0915 06:32:50.332821    7868 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:32:50.338415    7868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:32:50.356880    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:50.493789    7868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:32:50.525304    7868 certs.go:68] Setting up /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302 for IP: 192.168.49.2
	I0915 06:32:50.525334    7868 certs.go:194] generating shared ca certs ...
	I0915 06:32:50.525359    7868 certs.go:226] acquiring lock for ca certs: {Name:mk9734d7d8528324942d7ac525e3945db0f3a44c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:50.525684    7868 certs.go:240] generating "minikubeCA" ca cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.key
	I0915 06:32:50.758533    7868 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.crt ...
	I0915 06:32:50.758571    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.crt: {Name:mk942cbec0b4ac38f6015496c0c3c80105ed308e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:50.759147    7868 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.key ...
	I0915 06:32:50.759183    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.key: {Name:mk0a4ea36d4b01179cdf3b13a8fa9f1ead7ab7da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:50.759588    7868 certs.go:240] generating "proxyClientCA" ca cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.key
	I0915 06:32:50.866930    7868 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.crt ...
	I0915 06:32:50.866968    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.crt: {Name:mk93e5cf29bf8e0823a3378f451d44f0a1ef340a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:50.867416    7868 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.key ...
	I0915 06:32:50.867442    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.key: {Name:mk3da28b2927c5124e2df5285a05c3ae545d8d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:50.867748    7868 certs.go:256] generating profile certs ...
	I0915 06:32:50.867845    7868 certs.go:363] generating signed profile cert for "minikube-user": /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.key
	I0915 06:32:50.867888    7868 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt with IP's: []
	I0915 06:32:51.044346    7868 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt ...
	I0915 06:32:51.044390    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: {Name:mka9374cac39f859d055aaa36947b98ff7877854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.044852    7868 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.key ...
	I0915 06:32:51.044889    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.key: {Name:mkc85ace3ae00ec3b78044c3b1b367e59706c4aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.045241    7868 certs.go:363] generating signed profile cert for "minikube": /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key.a2f11473
	I0915 06:32:51.045316    7868 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt.a2f11473 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:32:51.472952    7868 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt.a2f11473 ...
	I0915 06:32:51.472990    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt.a2f11473: {Name:mk59c6604ae3a25255714df729d95ceb5f16de01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.473441    7868 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key.a2f11473 ...
	I0915 06:32:51.473473    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key.a2f11473: {Name:mk521bd7d54b27f267b4b33e4292ac8604c79b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.473774    7868 certs.go:381] copying /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt.a2f11473 -> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt
	I0915 06:32:51.473956    7868 certs.go:385] copying /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key.a2f11473 -> /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key
	I0915 06:32:51.474127    7868 certs.go:363] generating signed profile cert for "aggregator": /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.key
	I0915 06:32:51.474178    7868 crypto.go:68] Generating cert /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.crt with IP's: []
	I0915 06:32:51.675649    7868 crypto.go:156] Writing cert to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.crt ...
	I0915 06:32:51.675686    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.crt: {Name:mk02f9364198cf0ac1491f8f2e1cf314735221d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.676090    7868 crypto.go:164] Writing key to /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.key ...
	I0915 06:32:51.676127    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.key: {Name:mk58f3acee32efb213197cdec96dfa2195d5f1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:32:51.676634    7868 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:32:51.676704    7868 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/ca.pem (1115 bytes)
	I0915 06:32:51.676769    7868 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/cert.pem (1164 bytes)
	I0915 06:32:51.676826    7868 certs.go:484] found cert: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/certs/key.pem (1675 bytes)
	I0915 06:32:51.677743    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:32:51.717551    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 06:32:51.755139    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:32:51.793390    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:32:51.831421    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:32:51.872128    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 06:32:51.910231    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:32:51.949359    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 06:32:51.994622    7868 ssh_runner.go:362] scp /home/g528047478195_compute/minikube-integration/19644-430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:32:52.051402    7868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:32:52.097084    7868 ssh_runner.go:195] Run: openssl version
	I0915 06:32:52.105509    7868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:32:52.122132    7868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:32:52.128574    7868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:32 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:32:52.128678    7868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:32:52.139083    7868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
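The b5213941.0 link created here follows OpenSSL's hashed-directory convention: libraries scanning /etc/ssl/certs look certificates up by subject-name hash, so the link is named after the hash computed at 06:32:52.128 plus a .0 suffix. Deriving the same name by hand (sketch):

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$H.0"    # b5213941.0 for this CA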
	I0915 06:32:52.156367    7868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:32:52.162379    7868 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:32:52.162452    7868 kubeadm.go:392] StartCluster: {Name:addons-353302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-353302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:32:52.162719    7868 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 06:32:52.189962    7868 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:32:52.204840    7868 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:32:52.219301    7868 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:32:52.219446    7868 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:32:52.234076    7868 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:32:52.234107    7868 kubeadm.go:157] found existing configuration files:
	
	I0915 06:32:52.234253    7868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:32:52.248724    7868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:32:52.248919    7868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:32:52.262728    7868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:32:52.277491    7868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:32:52.277719    7868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:32:52.291540    7868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:32:52.305821    7868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:32:52.306025    7868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:32:52.320798    7868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:32:52.335272    7868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:32:52.335561    7868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:32:52.349569    7868 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:32:52.406047    7868 kubeadm.go:310] W0915 06:32:52.404865    1683 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:32:52.407033    7868 kubeadm.go:310] W0915 06:32:52.406089    1683 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:32:52.531633    7868 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:33:05.462950    7868 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:33:05.463049    7868 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:33:05.463226    7868 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:33:05.463448    7868 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:33:05.463624    7868 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:33:05.463753    7868 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:33:05.470555    7868 out.go:235]   - Generating certificates and keys ...
	I0915 06:33:05.470698    7868 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:33:05.470863    7868 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:33:05.471005    7868 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:33:05.471141    7868 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:33:05.471294    7868 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:33:05.471432    7868 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:33:05.471538    7868 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:33:05.471799    7868 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-353302 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:33:05.471917    7868 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:33:05.472131    7868 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-353302 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:33:05.472304    7868 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:33:05.472433    7868 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:33:05.472527    7868 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:33:05.472657    7868 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:33:05.472763    7868 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:33:05.472860    7868 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:33:05.472956    7868 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:33:05.473121    7868 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:33:05.473229    7868 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:33:05.473392    7868 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:33:05.473512    7868 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:33:05.476428    7868 out.go:235]   - Booting up control plane ...
	I0915 06:33:05.476587    7868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:33:05.476715    7868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:33:05.476831    7868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:33:05.477043    7868 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:33:05.477223    7868 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:33:05.477328    7868 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:33:05.477570    7868 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:33:05.477766    7868 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:33:05.477883    7868 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001256351s
	I0915 06:33:05.478040    7868 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:33:05.478155    7868 kubeadm.go:310] [api-check] The API server is healthy after 6.502866501s
	I0915 06:33:05.478345    7868 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:33:05.478526    7868 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:33:05.478632    7868 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:33:05.478924    7868 kubeadm.go:310] [mark-control-plane] Marking the node addons-353302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:33:05.479031    7868 kubeadm.go:310] [bootstrap-token] Using token: 3fy7e7.asmvrpcxlbnr2ogi
	I0915 06:33:05.490807    7868 out.go:235]   - Configuring RBAC rules ...
	I0915 06:33:05.491050    7868 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:33:05.491221    7868 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:33:05.491495    7868 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:33:05.491754    7868 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:33:05.492025    7868 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:33:05.492266    7868 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:33:05.492536    7868 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:33:05.492630    7868 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:33:05.492730    7868 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:33:05.492745    7868 kubeadm.go:310] 
	I0915 06:33:05.492917    7868 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:33:05.492952    7868 kubeadm.go:310] 
	I0915 06:33:05.493159    7868 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:33:05.493177    7868 kubeadm.go:310] 
	I0915 06:33:05.493269    7868 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:33:05.493451    7868 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:33:05.493598    7868 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:33:05.493639    7868 kubeadm.go:310] 
	I0915 06:33:05.493806    7868 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:33:05.493823    7868 kubeadm.go:310] 
	I0915 06:33:05.493945    7868 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:33:05.493960    7868 kubeadm.go:310] 
	I0915 06:33:05.494059    7868 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:33:05.494236    7868 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:33:05.494430    7868 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:33:05.494446    7868 kubeadm.go:310] 
	I0915 06:33:05.494604    7868 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:33:05.494762    7868 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:33:05.494777    7868 kubeadm.go:310] 
	I0915 06:33:05.494925    7868 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3fy7e7.asmvrpcxlbnr2ogi \
	I0915 06:33:05.495118    7868 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ddfc0312a903acd0f539840ee56f9695f04f87ba4a350b20216c762d6157098e \
	I0915 06:33:05.495160    7868 kubeadm.go:310] 	--control-plane 
	I0915 06:33:05.495172    7868 kubeadm.go:310] 
	I0915 06:33:05.495346    7868 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:33:05.495377    7868 kubeadm.go:310] 
	I0915 06:33:05.495533    7868 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3fy7e7.asmvrpcxlbnr2ogi \
	I0915 06:33:05.495719    7868 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ddfc0312a903acd0f539840ee56f9695f04f87ba4a350b20216c762d6157098e 
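
The sha256:… value in the join commands above pins the cluster CA's public key: kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate, not the whole cert. A small Go sketch of that derivation (standard kubeadm pinning scheme; the default /etc/kubernetes/pki/ca.crt path is an assumption, not shown in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes) // first PEM block: the CA certificate
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
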
	I0915 06:33:05.495760    7868 cni.go:84] Creating CNI manager for ""
	I0915 06:33:05.495788    7868 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:33:05.502191    7868 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:33:05.508696    7868 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:33:05.524314    7868 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
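
The 496-byte conflist scp'd above configures the bridge CNI plugin the previous line recommended. Its exact contents are not shown in the log; this Go sketch only emits the general shape of such a bridge conflist (all field values illustrative, though the 10.244.0.0/16 pod subnet matches the 10.244.0.x pod IPs seen later in this log):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type": "host-local",
					// Illustrative; matches the pod IPs (10.244.0.x) later in this log.
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
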
	I0915 06:33:05.556461    7868 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:33:05.556581    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:05.556719    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-353302 minikube.k8s.io/updated_at=2024_09_15T06_33_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-353302 minikube.k8s.io/primary=true
	I0915 06:33:05.771718    7868 ops.go:34] apiserver oom_adj: -16
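
The oom_adj probe above verifies the API server is shielded from the kernel OOM killer (-16 biases the kernel away from killing it). The same check, sketched in Go rather than bash — assumes pgrep matches exactly one kube-apiserver process, as on this single-node cluster:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", data)
}
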
	I0915 06:33:05.771897    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:06.272249    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:06.772693    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:07.272903    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:07.772413    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:08.272449    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:08.772380    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:09.272561    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:09.772120    7868 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:09.898240    7868 kubeadm.go:1113] duration metric: took 4.341779175s to wait for elevateKubeSystemPrivileges
	I0915 06:33:09.898306    7868 kubeadm.go:394] duration metric: took 17.735859329s to StartCluster
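
The run of "kubectl get sa default" calls above is a poll: judging by the timestamps, minikube retries roughly every 500ms until the default ServiceAccount exists, then records the 4.34s wait. The shape of that loop in Go — command and paths are taken from the log; the loop itself is an illustration, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for default service account")
		case <-time.After(500 * time.Millisecond): // cadence inferred from the timestamps above
		}
	}
}
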
	I0915 06:33:09.898342    7868 settings.go:142] acquiring lock: {Name:mk1e2649e48b9a8574006d29625639cd0ae67c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:09.898665    7868 settings.go:150] Updating kubeconfig:  /home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	I0915 06:33:09.899508    7868 lock.go:35] WriteFile acquiring /home/g528047478195_compute/minikube-integration/19644-430/kubeconfig: {Name:mk854dba8ef3c1bc5982252c02048976ed919a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:09.899986    7868 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:33:09.900170    7868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:33:09.900687    7868 config.go:182] Loaded profile config "addons-353302": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:33:09.900739    7868 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:33:09.900875    7868 addons.go:69] Setting yakd=true in profile "addons-353302"
	I0915 06:33:09.900907    7868 addons.go:234] Setting addon yakd=true in "addons-353302"
	I0915 06:33:09.900965    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.901409    7868 addons.go:69] Setting inspektor-gadget=true in profile "addons-353302"
	I0915 06:33:09.901434    7868 addons.go:234] Setting addon inspektor-gadget=true in "addons-353302"
	I0915 06:33:09.901467    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.902270    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.903062    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
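
Each addon's setup follows the same preamble visible here: flip the flag in the profile, then confirm the "addons-353302" host container is still up by asking Docker for its state. A trimmed sketch of that check (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostRunning asks the Docker CLI for the container's state, as the
// cli_runner lines above do with --format={{.State.Status}}.
func hostRunning(profile string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect", profile,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "running", nil
}

func main() {
	ok, err := hostRunning("addons-353302")
	fmt.Println(ok, err)
}
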
	I0915 06:33:09.905076    7868 addons.go:69] Setting metrics-server=true in profile "addons-353302"
	I0915 06:33:09.905109    7868 addons.go:234] Setting addon metrics-server=true in "addons-353302"
	I0915 06:33:09.905148    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.905920    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.906407    7868 addons.go:69] Setting cloud-spanner=true in profile "addons-353302"
	I0915 06:33:09.906453    7868 addons.go:234] Setting addon cloud-spanner=true in "addons-353302"
	I0915 06:33:09.906503    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.907439    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.911933    7868 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-353302"
	I0915 06:33:09.912019    7868 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-353302"
	I0915 06:33:09.912073    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.913065    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.918403    7868 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-353302"
	I0915 06:33:09.918493    7868 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-353302"
	I0915 06:33:09.918574    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.921137    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.918778    7868 addons.go:69] Setting registry=true in profile "addons-353302"
	I0915 06:33:09.932881    7868 addons.go:234] Setting addon registry=true in "addons-353302"
	I0915 06:33:09.932959    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.933903    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.931418    7868 addons.go:69] Setting default-storageclass=true in profile "addons-353302"
	I0915 06:33:09.946687    7868 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-353302"
	I0915 06:33:09.947313    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.918791    7868 addons.go:69] Setting storage-provisioner=true in profile "addons-353302"
	I0915 06:33:09.949525    7868 addons.go:234] Setting addon storage-provisioner=true in "addons-353302"
	I0915 06:33:09.949594    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.950410    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.918804    7868 addons.go:69] Setting volcano=true in profile "addons-353302"
	I0915 06:33:09.959594    7868 addons.go:234] Setting addon volcano=true in "addons-353302"
	I0915 06:33:09.959682    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:09.960674    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:09.918809    7868 addons.go:69] Setting volumesnapshots=true in profile "addons-353302"
	I0915 06:33:09.918977    7868 out.go:177] * Verifying Kubernetes components...
	I0915 06:33:09.931434    7868 addons.go:69] Setting gcp-auth=true in profile "addons-353302"
	I0915 06:33:09.931445    7868 addons.go:69] Setting helm-tiller=true in profile "addons-353302"
	I0915 06:33:09.931487    7868 addons.go:69] Setting ingress=true in profile "addons-353302"
	I0915 06:33:09.931493    7868 addons.go:69] Setting ingress-dns=true in profile "addons-353302"
	I0915 06:33:09.918798    7868 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-353302"
	I0915 06:33:10.006101    7868 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-353302"
	I0915 06:33:10.007021    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.025097    7868 mustload.go:65] Loading cluster: addons-353302
	I0915 06:33:10.025551    7868 config.go:182] Loaded profile config "addons-353302": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:33:10.026124    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.038456    7868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:33:10.100399    7868 addons.go:234] Setting addon volumesnapshots=true in "addons-353302"
	I0915 06:33:10.100544    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.101720    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.104515    7868 addons.go:234] Setting addon helm-tiller=true in "addons-353302"
	I0915 06:33:10.104670    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.105710    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.136572    7868 addons.go:234] Setting addon ingress=true in "addons-353302"
	I0915 06:33:10.136712    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.137669    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.179404    7868 addons.go:234] Setting addon ingress-dns=true in "addons-353302"
	I0915 06:33:10.179593    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.220023    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.338366    7868 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:33:10.356896    7868 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:33:10.359469    7868 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:33:10.359576    7868 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:33:10.359825    7868 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:33:10.359992    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.368651    7868 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:33:10.368683    7868 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:33:10.368785    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.372762    7868 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:33:10.372858    7868 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:33:10.373055    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
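
The --format argument in these inspect calls is a Go text/template: index NetworkSettings.Ports by the "22/tcp" key, take the first binding, and print its HostPort — this is how the host port for SSH into the node is resolved. A self-contained demonstration against mock data (the mock only mimics the relevant slice of Docker's inspect JSON):

package main

import (
	"os"
	"text/template"
)

type binding struct{ HostIP, HostPort string }

func main() {
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]binding{
				"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32768"}},
			},
		},
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	// Prints 32768 — the SSH port that shows up in the sshutil lines below.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
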
	I0915 06:33:10.468675    7868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
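
The long pipeline above rewrites CoreDNS's Corefile in place: it inserts a hosts block resolving host.minikube.internal to 192.168.49.1 (the Docker network gateway) with fallthrough ahead of the forward plugin, adds a log directive before errors, and pushes the result back with kubectl replace. The string surgery it performs, sketched in Go — illustrative only; minikube really does it with sed as logged:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the forward
// plugin line, matching what the sed expression above does.
func injectHostRecord(corefile, gatewayIP string) string {
	hosts := "        hosts {\n" +
		"           " + gatewayIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	i := strings.Index(corefile, "        forward .")
	if i < 0 {
		return corefile // no forward plugin found; leave untouched
	}
	return corefile[:i] + hosts + corefile[i:]
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
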
	I0915 06:33:10.492831    7868 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:33:10.501466    7868 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:33:10.501500    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:33:10.501638    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.526335    7868 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:33:10.545048    7868 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:33:10.545161    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:33:10.547458    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.579352    7868 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:33:10.588091    7868 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:33:10.588208    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:33:10.588401    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.627873    7868 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:33:10.654727    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:33:10.657454    7868 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:33:10.657541    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:33:10.660879    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:33:10.661173    7868 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:33:10.661233    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:33:10.661408    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.685420    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:33:10.690754    7868 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:33:10.703627    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:33:10.703976    7868 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:33:10.704062    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:33:10.704240    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.743371    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:33:10.757462    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:10.766909    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:33:10.773196    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:33:10.790910    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:33:10.790948    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:33:10.791079    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.798774    7868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:33:10.857776    7868 addons.go:234] Setting addon default-storageclass=true in "addons-353302"
	I0915 06:33:10.857831    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.858633    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:10.874438    7868 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:33:10.880036    7868 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:33:10.880149    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:33:10.880329    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:10.935698    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:10.994685    7868 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:33:10.994721    7868 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:33:11.097256    7868 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:33:11.100097    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:33:11.100127    7868 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:33:11.100252    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:11.124104    7868 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:11.124566    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.128555    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.144409    7868 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:33:11.152456    7868 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:11.156656    7868 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:33:11.156686    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:33:11.156814    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:11.165057    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.183443    7868 cli_runner.go:217] Completed: docker container inspect addons-353302 --format={{.State.Status}}: (1.17629555s)
	I0915 06:33:11.184832    7868 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-353302"
	I0915 06:33:11.184897    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:11.185877    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:11.209493    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.215946    7868 cli_runner.go:217] Completed: docker container inspect addons-353302 --format={{.State.Status}}: (1.148467359s)
	I0915 06:33:11.219024    7868 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 06:33:11.222141    7868 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 06:33:11.231620    7868 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 06:33:11.238022    7868 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:33:11.238056    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 06:33:11.238163    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:11.353020    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.375569    7868 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302: (1.006747192s)
	I0915 06:33:11.375610    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.392122    7868 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:33:11.392153    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:33:11.436143    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.460433    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.520154    7868 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:33:11.520185    7868 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:33:11.520321    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:11.560338    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.635451    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.641368    7868 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:33:11.644248    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.659770    7868 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:33:11.663638    7868 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:33:11.663671    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:33:11.663777    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:11.666832    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.667026    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.733701    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:11.869652    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
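
Addon manifests are applied in batches from here on: one kubectl invocation per addon with a -f flag per file, run under sudo with KUBECONFIG pointed at the node's kubeconfig, exactly as in the registry apply above. A sketch of how such a command line can be assembled (the helper name is hypothetical; binary and file paths are from the log):

package main

import (
	"os/exec"
)

// applyAddonManifests builds "sudo KUBECONFIG=<cfg> kubectl apply -f a -f b ..."
// — sudo accepts the leading VAR=value environment assignment.
func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return exec.Command("sudo", args...).Run()
}

func main() {
	_ = applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/registry-rc.yaml",
		"/etc/kubernetes/addons/registry-svc.yaml",
		"/etc/kubernetes/addons/registry-proxy.yaml",
	)
}
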
	I0915 06:33:12.034921    7868 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:33:12.034952    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:33:12.083741    7868 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:33:12.083856    7868 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:33:12.127623    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:33:12.275510    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:33:12.374313    7868 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:33:12.374348    7868 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:33:12.465120    7868 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:33:12.465154    7868 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:33:12.536696    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:33:12.567367    7868 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:33:12.567426    7868 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:33:12.647470    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:33:12.708765    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:33:12.772270    7868 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:33:12.772318    7868 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:33:12.799439    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:33:13.074818    7868 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:33:13.074847    7868 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:33:13.100973    7868 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:33:13.101026    7868 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:33:13.147241    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:33:13.147352    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:33:13.281076    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:33:13.309252    7868 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:33:13.309309    7868 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:33:13.347910    7868 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:33:13.347952    7868 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:33:13.382304    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:33:13.449714    7868 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:33:13.449748    7868 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:33:13.527971    7868 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:33:13.528004    7868 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:33:13.544438    7868 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:33:13.544473    7868 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:33:13.645479    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:33:13.645514    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:33:13.664350    7868 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.195625577s)
	I0915 06:33:13.664389    7868 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 06:33:13.666052    7868 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.867165089s)
	I0915 06:33:13.667309    7868 node_ready.go:35] waiting up to 6m0s for node "addons-353302" to be "Ready" ...
	I0915 06:33:13.674918    7868 node_ready.go:49] node "addons-353302" has status "Ready":"True"
	I0915 06:33:13.674945    7868 node_ready.go:38] duration metric: took 7.605817ms for node "addons-353302" to be "Ready" ...
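
The node_ready check above is the standard one: fetch the Node object and look for the Ready condition with status True. Sketched with client-go (minikube's actual helper differs in detail; this is only the decision rule):

package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeReady reports whether the named node carries the Ready condition
// with status True — the check behind the node_ready lines above.
func NodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
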
	I0915 06:33:13.674960    7868 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:33:13.729978    7868 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:13.773387    7868 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:33:13.773424    7868 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:33:13.796075    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:33:13.920776    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:33:13.962674    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:33:13.962711    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:33:14.033890    7868 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:33:14.033923    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:33:14.065229    7868 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:33:14.065274    7868 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:33:14.314881    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:33:14.314923    7868 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:33:14.342345    7868 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-353302" context rescaled to 1 replicas
	I0915 06:33:14.498752    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:33:14.498787    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:33:14.666508    7868 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:33:14.666541    7868 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:33:14.675964    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:33:14.923494    7868 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:33:14.923528    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:33:15.177591    7868 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:33:15.177629    7868 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:33:15.344542    7868 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:33:15.344579    7868 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:33:15.471433    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:33:15.595390    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:33:15.595424    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:33:16.010191    7868 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:33:16.010226    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:33:16.304449    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:33:16.304482    7868 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:33:16.557780    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:33:16.619439    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:16.791145    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:33:16.791180    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:33:17.457709    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:33:17.457736    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:33:18.131733    7868 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:33:18.131763    7868 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:33:18.421779    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:33:18.796380    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:21.299702    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.429921117s)
	I0915 06:33:21.299762    7868 addons.go:475] Verifying addon registry=true in "addons-353302"
	I0915 06:33:21.300072    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.172323292s)
	I0915 06:33:21.300182    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.024561754s)
	I0915 06:33:21.307500    7868 out.go:177] * Verifying registry addon...
	I0915 06:33:21.319567    7868 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:33:21.985973    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:22.009336    7868 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:33:22.009364    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:22.651354    7868 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:33:22.651383    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:22.894416    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:23.100070    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:23.506916    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:24.574214    7868 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:33:24.574483    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:24.643631    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:24.646629    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:24.847447    7868 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:33:24.921250    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:24.931185    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.394415557s)
	I0915 06:33:24.948562    7868 addons.go:234] Setting addon gcp-auth=true in "addons-353302"
	I0915 06:33:24.948713    7868 host.go:66] Checking if "addons-353302" exists ...
	I0915 06:33:24.950072    7868 cli_runner.go:164] Run: docker container inspect addons-353302 --format={{.State.Status}}
	I0915 06:33:24.998415    7868 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:33:24.998526    7868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-353302
	I0915 06:33:25.047192    7868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/addons-353302/id_rsa Username:docker}
	I0915 06:33:25.100007    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:25.449455    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:26.020782    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:26.317720    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:26.803836    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:26.999060    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.351541691s)
	I0915 06:33:27.070605    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:27.235073    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:27.459579    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:28.158630    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:28.758589    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:29.034449    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:29.133302    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:30.007031    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:30.129147    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:30.742774    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:31.124526    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:31.149164    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:31.462021    7868 pod_ready.go:98] pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:29 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-15 06:33:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-15 06:33:17 +0000 UTC,FinishedAt:2024-09-15 06:33:29 +0000 UTC,ContainerID:docker://916c3eafcb7d8b0fda9cc771b18b5d2458ce0f0484bc06b1a492449bda825232,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://916c3eafcb7d8b0fda9cc771b18b5d2458ce0f0484bc06b1a492449bda825232 Started:0xc019b64f30 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc019b17cd0} {Name:kube-api-access-95k27 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc019b17ce0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0915 06:33:31.462186    7868 pod_ready.go:82] duration metric: took 17.732166443s for pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace to be "Ready" ...
	E0915 06:33:31.462263    7868 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-cwrgz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:29 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 06:33:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-15 06:33:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-15 06:33:17 +0000 UTC,FinishedAt:2024-09-15 06:33:29 +0000 UTC,ContainerID:docker://916c3eafcb7d8b0fda9cc771b18b5d2458ce0f0484bc06b1a492449bda825232,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://916c3eafcb7d8b0fda9cc771b18b5d2458ce0f0484bc06b1a492449bda825232 Started:0xc019b64f30 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc019b17cd0} {Name:kube-api-access-95k27 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc019b17ce0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0915 06:33:31.462323    7868 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace to be "Ready" ...
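
The skip just above is deliberate: this coredns replica is evidently the one removed by the earlier rescale to 1 (logged at 06:33:14), so its phase is Succeeded, and a pod in phase Succeeded or Failed can never become Ready — the waiter abandons it rather than burn the timeout, and moves on to coredns-7c65d6cfc9-ql82t. The decision rule, sketched with the same client-go types as the node check earlier:

package podcheck

import corev1 "k8s.io/api/core/v1"

// ReadyOrTerminal mirrors the decision visible above: a pod in a terminal
// phase will never report Ready, so a waiter should stop watching it.
func ReadyOrTerminal(pod *corev1.Pod) (ready, terminal bool) {
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		return false, true
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, false
		}
	}
	return false, false
}
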
	I0915 06:33:31.577233    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:31.998102    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:32.528730    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:32.951430    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:33.504868    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:33.751782    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:34.247377    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:34.671297    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:34.987728    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:35.627130    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:35.760816    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (23.052000595s)
	I0915 06:33:35.761014    7868 addons.go:475] Verifying addon ingress=true in "addons-353302"
	I0915 06:33:35.764363    7868 out.go:177] * Verifying ingress addon...
	I0915 06:33:35.772752    7868 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:33:35.944295    7868 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:33:35.944397    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:35.947092    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:36.465332    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:36.842742    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:36.859103    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:37.243462    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:37.243562    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:37.461858    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:37.463577    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:37.941017    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:37.965679    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:38.378312    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:38.398153    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:38.516304    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:38.872940    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:38.873906    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:39.412938    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (26.131814353s)
	I0915 06:33:39.413301    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (26.030935688s)
	I0915 06:33:39.413440    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (25.61732102s)
	I0915 06:33:39.413460    7868 addons.go:475] Verifying addon metrics-server=true in "addons-353302"
	I0915 06:33:39.413534    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (25.492720249s)
	I0915 06:33:39.413642    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (24.737634602s)
	I0915 06:33:39.414108    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (23.942620566s)
	W0915 06:33:39.414160    7868 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:33:39.414211    7868 retry.go:31] will retry after 309.940258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:33:39.414437    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (22.856613065s)
	I0915 06:33:39.414853    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (26.615380797s)
	I0915 06:33:39.421065    7868 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-353302 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:33:39.690664    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:39.695627    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:39.725033    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
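	[editor's note] The failure above ("no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first") is a CRD-establishment race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same apply batch as the CRDs that define that kind, before the API server has registered them. The retried `kubectl apply --force` just issued succeeds once the CRDs are established. A minimal sketch of an equivalent manual sequencing, reusing the file paths from this log (the explicit `kubectl wait` step is our illustration, not a command minikube runs):
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml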
	I0915 06:33:40.331304    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:40.336747    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:40.533114    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:40.534620    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:40.564197    7868 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (15.565690154s)
	I0915 06:33:40.564387    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (22.142549798s)
	I0915 06:33:40.564597    7868 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-353302"
	I0915 06:33:40.567651    7868 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:33:40.567842    7868 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:33:40.570539    7868 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:40.573414    7868 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:33:40.580098    7868 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:33:40.580148    7868 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:33:40.677644    7868 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:33:40.677677    7868 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:33:40.788664    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:40.915017    7868 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:33:40.915052    7868 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:33:41.074044    7868 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:33:41.074078    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:41.098225    7868 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:33:41.391421    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:41.392935    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:41.701469    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:41.704618    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:41.705514    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:41.806579    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:42.223734    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:42.225764    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:42.299083    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:42.566997    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:42.601040    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:42.696236    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:42.996669    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:43.036711    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:43.247357    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:43.373514    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:43.532460    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:43.533841    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:43.669531    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:43.828962    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:43.928827    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:44.171842    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:44.308322    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:44.458336    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:44.625002    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:44.797562    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.072432182s)
	I0915 06:33:44.797806    7868 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.699552727s)
	I0915 06:33:44.803397    7868 addons.go:475] Verifying addon gcp-auth=true in "addons-353302"
	I0915 06:33:44.806526    7868 out.go:177] * Verifying gcp-auth addon...
	I0915 06:33:44.811227    7868 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:33:44.838410    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:44.852411    7868 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:33:44.865723    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:45.093446    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:45.282266    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:45.326222    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:45.477819    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:45.629007    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:45.891354    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:45.892697    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:46.083404    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:46.278505    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:46.324367    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:46.587395    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:46.785954    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:46.843620    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:47.090814    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:47.288946    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:47.330718    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:47.478543    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:47.603148    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:47.793263    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:47.826591    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:48.082308    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:48.282492    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:48.326764    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:48.673125    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:48.823190    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:48.868165    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:49.081607    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:49.281472    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:49.325299    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:49.580498    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:49.781371    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:49.831579    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:49.975312    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:50.081810    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:50.281635    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:50.326459    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:50.588135    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:50.786186    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:50.830554    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:51.086417    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:51.282604    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:51.326378    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:51.584421    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:51.781995    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:51.826961    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:52.084953    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:52.284459    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:52.326821    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:52.472074    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:52.584035    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:52.781111    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:52.826412    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:53.083866    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:53.280337    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:53.325742    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:53.582046    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:53.782298    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:53.825152    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:54.081969    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:54.286729    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:54.331009    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:54.475477    7868 pod_ready.go:103] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:54.582401    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:54.781982    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:54.837000    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:54.986333    7868 pod_ready.go:93] pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:54.986452    7868 pod_ready.go:82] duration metric: took 23.524082105s for pod "coredns-7c65d6cfc9-ql82t" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:54.986539    7868 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.033936    7868 pod_ready.go:93] pod "etcd-addons-353302" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:55.034052    7868 pod_ready.go:82] duration metric: took 47.46749ms for pod "etcd-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.034129    7868 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.081741    7868 pod_ready.go:93] pod "kube-apiserver-addons-353302" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:55.081842    7868 pod_ready.go:82] duration metric: took 47.658992ms for pod "kube-apiserver-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.081882    7868 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.089275    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:55.096469    7868 pod_ready.go:93] pod "kube-controller-manager-addons-353302" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:55.096582    7868 pod_ready.go:82] duration metric: took 14.644855ms for pod "kube-controller-manager-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.096623    7868 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-skpck" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.126346    7868 pod_ready.go:93] pod "kube-proxy-skpck" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:55.126457    7868 pod_ready.go:82] duration metric: took 29.759394ms for pod "kube-proxy-skpck" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.126531    7868 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.280779    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:55.391097    7868 pod_ready.go:93] pod "kube-scheduler-addons-353302" in "kube-system" namespace has status "Ready":"True"
	I0915 06:33:55.391242    7868 pod_ready.go:82] duration metric: took 264.663063ms for pod "kube-scheduler-addons-353302" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.391317    7868 pod_ready.go:39] duration metric: took 41.716338276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
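	[editor's note] The readiness gate that just completed polls each system-critical pod until its Ready condition reports True. A sketch of an equivalent one-shot check, reusing the context name and the kube-dns label from this log (the invocation is our illustration, not something the test executed):
		kubectl --context addons-353302 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s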
	I0915 06:33:55.391422    7868 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:33:55.391612    7868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:33:55.422420    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:55.438505    7868 api_server.go:72] duration metric: took 45.538468542s to wait for apiserver process to appear ...
	I0915 06:33:55.438573    7868 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:33:55.438618    7868 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:33:55.446541    7868 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:33:55.447941    7868 api_server.go:141] control plane version: v1.31.1
	I0915 06:33:55.447982    7868 api_server.go:131] duration metric: took 9.390547ms to wait for apiserver health ...
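	[editor's note] The healthz probe above is a plain HTTPS GET against the API server. A manual equivalent, assuming the default RBAC in which the system:public-info-viewer ClusterRole exposes /healthz to unauthenticated clients (hence no bearer token below):
		curl -k https://192.168.49.2:8443/healthz
		# expected response body: ok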
	I0915 06:33:55.447997    7868 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:33:55.580659    7868 system_pods.go:59] 18 kube-system pods found
	I0915 06:33:55.580792    7868 system_pods.go:61] "coredns-7c65d6cfc9-ql82t" [ff9e496f-0a2f-490c-ae6c-e971c7156288] Running
	I0915 06:33:55.580850    7868 system_pods.go:61] "csi-hostpath-attacher-0" [c27ed90d-32a8-47ac-9fcb-3b49c1506e1a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:33:55.580893    7868 system_pods.go:61] "csi-hostpath-resizer-0" [15ac3a9b-35ba-4a2b-9a47-513729bad607] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:33:55.580933    7868 system_pods.go:61] "csi-hostpathplugin-zwlrm" [9669dbb6-735d-434b-bcea-d4c108a96655] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:33:55.580996    7868 system_pods.go:61] "etcd-addons-353302" [ce402ffb-7500-403b-8d50-dbec44aabdcd] Running
	I0915 06:33:55.581023    7868 system_pods.go:61] "kube-apiserver-addons-353302" [7e57e85d-e0dd-4a4b-b504-43f6a03d8993] Running
	I0915 06:33:55.581047    7868 system_pods.go:61] "kube-controller-manager-addons-353302" [f60a324c-5fdb-405d-b5c7-c47c9c427bd5] Running
	I0915 06:33:55.581113    7868 system_pods.go:61] "kube-ingress-dns-minikube" [7e10931d-12c7-4b17-a00a-10e7cf170b75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:33:55.581154    7868 system_pods.go:61] "kube-proxy-skpck" [e9d9c13d-232e-4a9c-8b60-a8189b808aac] Running
	I0915 06:33:55.581180    7868 system_pods.go:61] "kube-scheduler-addons-353302" [9373a0b8-63cc-4b79-9194-d3bc6ce23acf] Running
	I0915 06:33:55.581206    7868 system_pods.go:61] "metrics-server-84c5f94fbc-tl5pc" [60336706-ef3c-4e47-b8a8-853c524e5125] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:33:55.581250    7868 system_pods.go:61] "nvidia-device-plugin-daemonset-cqvk8" [1b48cd7f-dab4-4c09-ac78-8c9ed2c3e699] Running
	I0915 06:33:55.581298    7868 system_pods.go:61] "registry-66c9cd494c-tgjvk" [e2cd5872-f5e5-4446-9681-3487f553eae7] Running
	I0915 06:33:55.581330    7868 system_pods.go:61] "registry-proxy-ftsrm" [f49b325f-086e-4d70-93ec-6ecea97709a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:33:55.581383    7868 system_pods.go:61] "snapshot-controller-56fcc65765-mszv6" [34e7195b-64c5-4682-b5ea-a4e6b3a90ec8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:33:55.581428    7868 system_pods.go:61] "snapshot-controller-56fcc65765-rbtfl" [366a05fe-37d8-4d3b-8c90-95a4d44d002f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:33:55.581455    7868 system_pods.go:61] "storage-provisioner" [3a45373b-2939-482c-a018-dbd8f4c97966] Running
	I0915 06:33:55.581479    7868 system_pods.go:61] "tiller-deploy-b48cc5f79-7wvqg" [3b39209b-916c-4391-9fcb-048af767f63b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:33:55.581541    7868 system_pods.go:74] duration metric: took 133.532592ms to wait for pod list to return data ...
	I0915 06:33:55.581588    7868 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:33:55.581958    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:55.768377    7868 default_sa.go:45] found service account: "default"
	I0915 06:33:55.768486    7868 default_sa.go:55] duration metric: took 186.85261ms for default service account to be created ...
	I0915 06:33:55.768558    7868 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:33:55.779477    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:55.824574    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:55.978078    7868 system_pods.go:86] 18 kube-system pods found
	I0915 06:33:55.978190    7868 system_pods.go:89] "coredns-7c65d6cfc9-ql82t" [ff9e496f-0a2f-490c-ae6c-e971c7156288] Running
	I0915 06:33:55.978229    7868 system_pods.go:89] "csi-hostpath-attacher-0" [c27ed90d-32a8-47ac-9fcb-3b49c1506e1a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:33:55.978263    7868 system_pods.go:89] "csi-hostpath-resizer-0" [15ac3a9b-35ba-4a2b-9a47-513729bad607] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:33:55.978322    7868 system_pods.go:89] "csi-hostpathplugin-zwlrm" [9669dbb6-735d-434b-bcea-d4c108a96655] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:33:55.978350    7868 system_pods.go:89] "etcd-addons-353302" [ce402ffb-7500-403b-8d50-dbec44aabdcd] Running
	I0915 06:33:55.978375    7868 system_pods.go:89] "kube-apiserver-addons-353302" [7e57e85d-e0dd-4a4b-b504-43f6a03d8993] Running
	I0915 06:33:55.978415    7868 system_pods.go:89] "kube-controller-manager-addons-353302" [f60a324c-5fdb-405d-b5c7-c47c9c427bd5] Running
	I0915 06:33:55.978445    7868 system_pods.go:89] "kube-ingress-dns-minikube" [7e10931d-12c7-4b17-a00a-10e7cf170b75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:33:55.978472    7868 system_pods.go:89] "kube-proxy-skpck" [e9d9c13d-232e-4a9c-8b60-a8189b808aac] Running
	I0915 06:33:55.978497    7868 system_pods.go:89] "kube-scheduler-addons-353302" [9373a0b8-63cc-4b79-9194-d3bc6ce23acf] Running
	I0915 06:33:55.978549    7868 system_pods.go:89] "metrics-server-84c5f94fbc-tl5pc" [60336706-ef3c-4e47-b8a8-853c524e5125] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:33:55.978576    7868 system_pods.go:89] "nvidia-device-plugin-daemonset-cqvk8" [1b48cd7f-dab4-4c09-ac78-8c9ed2c3e699] Running
	I0915 06:33:55.978601    7868 system_pods.go:89] "registry-66c9cd494c-tgjvk" [e2cd5872-f5e5-4446-9681-3487f553eae7] Running
	I0915 06:33:55.978645    7868 system_pods.go:89] "registry-proxy-ftsrm" [f49b325f-086e-4d70-93ec-6ecea97709a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:33:55.978675    7868 system_pods.go:89] "snapshot-controller-56fcc65765-mszv6" [34e7195b-64c5-4682-b5ea-a4e6b3a90ec8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:33:55.978713    7868 system_pods.go:89] "snapshot-controller-56fcc65765-rbtfl" [366a05fe-37d8-4d3b-8c90-95a4d44d002f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:33:55.978756    7868 system_pods.go:89] "storage-provisioner" [3a45373b-2939-482c-a018-dbd8f4c97966] Running
	I0915 06:33:55.978785    7868 system_pods.go:89] "tiller-deploy-b48cc5f79-7wvqg" [3b39209b-916c-4391-9fcb-048af767f63b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 06:33:55.978815    7868 system_pods.go:126] duration metric: took 210.216342ms to wait for k8s-apps to be running ...
	I0915 06:33:55.978848    7868 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:33:55.978975    7868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:33:56.000191    7868 system_svc.go:56] duration metric: took 21.331169ms WaitForService to wait for kubelet
	I0915 06:33:56.000314    7868 kubeadm.go:582] duration metric: took 46.100280339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:33:56.000371    7868 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:33:56.080983    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:56.168493    7868 node_conditions.go:122] node storage ephemeral capacity is 119475748Ki
	I0915 06:33:56.168591    7868 node_conditions.go:123] node cpu capacity is 2
	I0915 06:33:56.168647    7868 node_conditions.go:105] duration metric: took 168.219562ms to run NodePressure ...
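	[editor's note] The NodePressure verification reads the node's status conditions (MemoryPressure, DiskPressure, PIDPressure) alongside the CPU and ephemeral-storage capacity reported just above. A hedged manual equivalent for inspecting the same conditions:
		kubectl --context addons-353302 get nodes -o jsonpath='{range .items[*].status.conditions[*]}{.type}={.status}{"\n"}{end}'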
	I0915 06:33:56.168688    7868 start.go:241] waiting for startup goroutines ...
	I0915 06:33:56.311870    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:56.331834    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:56.582753    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:56.797403    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:56.852013    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:57.103597    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:57.283233    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:57.377486    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:57.598223    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:57.781145    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:57.827529    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:58.315763    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:58.360039    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:58.455078    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:58.595246    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:58.784504    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:58.829842    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:59.096145    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:59.290622    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:59.331228    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:59.587012    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:59.812719    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:59.838566    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:00.095702    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:00.282431    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:00.331485    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:00.597478    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:00.787250    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:00.824851    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:01.087412    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:01.281454    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:01.326080    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:01.597203    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:01.781687    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:01.826810    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:02.081825    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:02.280978    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:02.326362    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:02.582861    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:02.781694    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:02.836389    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:03.082790    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:03.289071    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:03.326975    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:03.583336    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:03.782698    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:03.826272    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:04.135110    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:04.284970    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:04.332992    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:04.603547    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:04.787534    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:04.865705    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:05.643230    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:05.644199    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:05.646488    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:05.737058    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:05.813094    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:05.830871    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:06.080976    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:06.429480    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:06.431951    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:06.632063    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:06.778775    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:06.824374    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:07.169531    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:07.284841    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:07.333592    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:07.595226    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:07.798765    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:07.895827    7868 kapi.go:107] duration metric: took 46.576236765s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:34:08.080676    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:08.282157    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:08.592733    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:08.830741    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:09.086151    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:09.279203    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:09.581389    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:09.805246    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:10.079560    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:10.285753    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:10.582769    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:10.783246    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:11.084066    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:11.279652    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:11.581203    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:11.801228    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:12.152534    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:12.299925    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:12.583441    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:12.786357    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:13.080234    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:13.281182    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:13.585607    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:13.787180    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:14.086687    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:14.280217    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:14.585606    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:14.779605    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:15.084580    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:15.280294    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:15.580080    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:15.978819    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:16.341177    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:16.343055    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:16.590906    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:16.780936    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:17.103634    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:17.296185    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:17.601699    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:17.787054    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:18.100351    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:18.287025    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:18.623145    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:18.802261    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:19.087272    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:19.285000    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:19.582840    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:19.785756    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:20.080236    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:20.371775    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:20.581028    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:20.782064    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:21.085685    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:21.284810    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:21.598262    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:21.791146    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:22.091254    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:22.280855    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:22.604811    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:22.789785    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:23.092024    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:23.296599    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:23.583463    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:23.783561    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:24.091031    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:24.289076    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:24.580697    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:24.788739    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:25.095017    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:25.290180    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:25.585101    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:25.782043    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:26.094507    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:26.390819    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:26.605481    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:26.894490    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:27.092013    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:27.281442    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:27.600592    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:27.793116    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:28.085740    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:28.510521    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:28.720075    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:28.864353    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:29.079542    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:29.283563    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:29.584243    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:29.793043    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:30.116257    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:30.293570    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:30.610649    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:30.790955    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:31.080462    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:31.282919    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:31.582049    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:31.786651    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:32.090732    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:32.309780    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:32.583722    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:32.783823    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:33.080837    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:33.292019    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:33.604346    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:33.781085    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:34.094017    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:34.318396    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:34.589166    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:34.780075    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:35.085351    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:35.282580    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:35.612220    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:35.782849    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:36.082878    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:36.279710    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:36.648824    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:36.781925    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:37.082788    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:37.281017    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:37.583785    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:37.788209    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:38.081682    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:38.285513    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:38.733025    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:38.833064    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:39.133770    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:39.343895    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:39.585231    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:39.789623    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:40.087849    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:40.290619    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:40.603419    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:40.788792    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:41.191100    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:41.281811    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:41.608497    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:41.795043    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:42.108639    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:42.280083    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:42.583074    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:42.791552    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:43.105203    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:43.488518    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:43.578959    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:43.938633    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:44.092887    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:44.289966    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:44.585003    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:44.782458    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:45.083030    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:45.280801    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:45.603459    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:45.799274    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:46.105345    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:46.281672    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:46.580829    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:46.790474    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:47.083522    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:47.283491    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:47.591227    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:48.648920    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:48.663759    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:48.951807    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:48.976890    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:49.028645    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:49.114227    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:49.354275    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:49.643267    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:49.904776    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:50.173859    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:50.313975    7868 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:50.688208    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:50.788260    7868 kapi.go:107] duration metric: took 1m15.015505059s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:34:51.099349    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:51.597070    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:52.083171    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:52.622648    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:53.126748    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:53.588086    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:54.091031    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:54.589141    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:55.113331    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:55.582689    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:56.096098    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:56.583126    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:57.083565    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:57.582428    7868 kapi.go:107] duration metric: took 1m17.00901024s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:35:07.315861    7868 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:35:07.315890    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:07.817644    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:08.316578    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:08.838907    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:09.319818    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:09.824184    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:10.404612    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:10.816554    7868 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:35:11.316691    7868 kapi.go:107] duration metric: took 1m26.505459506s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:35:11.320330    7868 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-353302 cluster.
	I0915 06:35:11.322983    7868 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:35:11.325224    7868 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:35:11.327805    7868 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, volcano, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0915 06:35:11.330416    7868 addons.go:510] duration metric: took 2m1.429677722s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner storage-provisioner-rancher ingress-dns metrics-server helm-tiller inspektor-gadget volcano yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0915 06:35:11.330477    7868 start.go:246] waiting for cluster config update ...
	I0915 06:35:11.330514    7868 start.go:255] writing updated cluster config ...
	I0915 06:35:11.330900    7868 ssh_runner.go:195] Run: rm -f paused
	I0915 06:35:11.816381    7868 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:35:11.826045    7868 out.go:177] * Done! kubectl is now configured to use "addons-353302" cluster and "default" namespace by default
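
The long runs of kapi.go:96 lines above are minikube polling the API server on a roughly half-second cadence until every pod matching a label selector reports Running; kapi.go:107 then records the whole wait as a duration metric. A minimal client-go sketch of that polling pattern, for orientation only (this is not minikube's actual kapi.go code; the function name and the 500ms interval are assumptions, while the log wording mirrors the output above):

	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector until all are Running,
	// logging the current phase on each miss, like the kapi.go:96 lines above.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		}
		return err
	}

Under this pattern, the 1m15s and 1m17s durations above are simply how long the ingress-nginx and csi-hostpath-driver pods took to leave Pending.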
	
	
	==> Docker <==
	Sep 15 06:44:46 addons-353302 dockerd[1157]: time="2024-09-15T06:44:46.870236162Z" level=info msg="ignoring event" container=a136d5216f462415ae27bb778708a9fd91336dc625ee5f0c42f12af662b14f2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:47 addons-353302 dockerd[1157]: time="2024-09-15T06:44:47.131807146Z" level=info msg="ignoring event" container=b74213e1df782d8767a6da63ae8cefff6d7d61746255a2449f73d401391c01e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:47 addons-353302 dockerd[1157]: time="2024-09-15T06:44:47.136033536Z" level=info msg="ignoring event" container=e150126289c3a5aeb74f007eee4287fc03537e12f0bc82beaffda4b7fc566a9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:51 addons-353302 dockerd[1157]: time="2024-09-15T06:44:51.960649181Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 06:44:51 addons-353302 dockerd[1157]: time="2024-09-15T06:44:51.964706271Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 06:44:53 addons-353302 dockerd[1157]: time="2024-09-15T06:44:53.790994291Z" level=info msg="ignoring event" container=c0791246e88e2fcce50ae499b013051c16a763a81eb1fd5c162a6bbf2ddc0f9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:54 addons-353302 dockerd[1157]: time="2024-09-15T06:44:54.002124042Z" level=info msg="ignoring event" container=1d6a94cc1d63796a5f59dba168e8a0ad80b225a00339feb448979985627e4b9c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:54 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:44:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97540a0b0b0ed85046777c70b38588fab39b46e91e550ebe669ce33d67dfb2b0/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 15 06:44:55 addons-353302 dockerd[1157]: time="2024-09-15T06:44:55.125550103Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 15 06:44:55 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:44:55Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 15 06:44:56 addons-353302 dockerd[1157]: time="2024-09-15T06:44:56.075388367Z" level=info msg="ignoring event" container=80999484ba84e87b6d2d5da6daff3dc4da1522f0ad90bbf532ec6188c7b1da8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:57 addons-353302 dockerd[1157]: time="2024-09-15T06:44:57.354404501Z" level=info msg="ignoring event" container=97540a0b0b0ed85046777c70b38588fab39b46e91e550ebe669ce33d67dfb2b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:44:59 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ff379a3807c657c07239554a431fb1ef319121eb5c3841690ca87cde69152fe2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 15 06:45:00 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:45:00Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 15 06:45:01 addons-353302 dockerd[1157]: time="2024-09-15T06:45:01.015965834Z" level=info msg="ignoring event" container=49a3a7caddff117cab37605de3def4573408a5e9fa18158297221a9b25683ce2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:02 addons-353302 dockerd[1157]: time="2024-09-15T06:45:02.472144405Z" level=info msg="ignoring event" container=ff379a3807c657c07239554a431fb1ef319121eb5c3841690ca87cde69152fe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:04 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:45:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9965690483fbde88cfbde3d8f617b6dd066d47da084a9531297a0c590b18d5e9/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east1-c.c.p79a29526b6c1e63c-tp.internal c.p79a29526b6c1e63c-tp.internal google.internal options ndots:5]"
	Sep 15 06:45:05 addons-353302 dockerd[1157]: time="2024-09-15T06:45:05.249310378Z" level=info msg="ignoring event" container=d7024dc7dffe9a9bb57c47f2e8bfe917ad31c55303a8db6b71e400ffa9ad1f6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:06 addons-353302 dockerd[1157]: time="2024-09-15T06:45:06.620434856Z" level=info msg="ignoring event" container=9965690483fbde88cfbde3d8f617b6dd066d47da084a9531297a0c590b18d5e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:14 addons-353302 dockerd[1157]: time="2024-09-15T06:45:14.412504270Z" level=info msg="ignoring event" container=b9200e00e97be5b2322fe909eeaf25dc0292fc45483ef75aa332056e8ba8c04e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:15 addons-353302 dockerd[1157]: time="2024-09-15T06:45:15.550553040Z" level=info msg="ignoring event" container=760945e0aec9e8579041b318a39409818fccc842924d525e41811a2061a8346f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:15 addons-353302 dockerd[1157]: time="2024-09-15T06:45:15.770191333Z" level=info msg="ignoring event" container=a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:15 addons-353302 cri-dockerd[1415]: time="2024-09-15T06:45:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-66c9cd494c-tgjvk_kube-system\": unexpected command output nsenter: cannot open /proc/3247/ns/net: No such file or directory\n with error: exit status 1"
	Sep 15 06:45:15 addons-353302 dockerd[1157]: time="2024-09-15T06:45:15.963678527Z" level=info msg="ignoring event" container=f52206cebdbd5704fa32c39f9f8f582462b3087e383d02ada23ed72fd70a64be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:45:16 addons-353302 dockerd[1157]: time="2024-09-15T06:45:16.244780992Z" level=info msg="ignoring event" container=b76dc9b14f05a1d36d23d06d7b02d75d97b43f8bc894c9cebc4b439b7d46c10f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
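
Two lines in this Docker section stand out: the 06:44:51 pair shows dockerd failing the manifest HEAD for gcr.io/k8s-minikube/busybox with "unauthorized: authentication failed", which leaves any pod using that image stuck pulling, while the later busybox:stable pulls from Docker Hub (06:44:55 and 06:45:00) succeed, so the failure is specific to that gcr.io request. As a hedged sketch of where such a failure surfaces in the Docker Engine Go SDK (the image reference comes from the log; the older types.ImagePullOptions name is an assumption, since newer SDK versions moved it to the image package):

	package main

	import (
		"context"
		"fmt"
		"io"
		"os"

		"github.com/docker/docker/api/types"
		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		// A registry auth failure like the one logged above is returned here,
		// from the manifest HEAD request, before any layers are downloaded.
		rc, err := cli.ImagePull(context.Background(), "gcr.io/k8s-minikube/busybox:latest", types.ImagePullOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, "pull failed:", err)
			return
		}
		defer rc.Close()
		io.Copy(os.Stdout, rc) // stream the pull-progress JSON dockerd reports
	}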
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7024dc7dffe9       a416a98b71e22                                                                                                                12 seconds ago      Exited              helper-pod                0                   9965690483fbd       helper-pod-delete-pvc-59f8d0a8-be52-4426-9cd8-003f857fbb40
	b87d61d36f605       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            4 minutes ago       Exited              gadget                    6                   aaba542befa8c       gadget-dxlhq
	8f252a34afa73       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   4b5d6e3a5e0dc       gcp-auth-89d5ffd79-r2ndn
	9f39a5c1c4d41       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             10 minutes ago      Running             controller                0                   fa9b2ae1fc3e7       ingress-nginx-controller-bc57996ff-vdksz
	290e3070f37a0       ce263a8653f9c                                                                                                                10 minutes ago      Exited              patch                     1                   0c1ef06a75f7b       ingress-nginx-admission-patch-72wrn
	f5d0f0a9c5347       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                    0                   db9e61320c051       ingress-nginx-admission-create-l5qg9
	d5804274928f9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner    0                   5075c531a486d       local-path-provisioner-86d989889c-kj79f
	e5f042835f61f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  11 minutes ago      Running             tiller                    0                   a9848380dd72d       tiller-deploy-b48cc5f79-7wvqg
	68c3848b35b89       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        11 minutes ago      Running             metrics-server            0                   9b47a8531661c       metrics-server-84c5f94fbc-tl5pc
	fb91b151b3227       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns      0                   a1c2f858b6ec3       kube-ingress-dns-minikube
	676744b79ff67       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               11 minutes ago      Running             cloud-spanner-emulator    0                   df34f34c66384       cloud-spanner-emulator-769b77f747-p42bq
	7003db7b7db77       6e38f40d628db                                                                                                                11 minutes ago      Running             storage-provisioner       0                   2b6273433fe98       storage-provisioner
	157dd5a368ee7       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   e50f1bca80479       coredns-7c65d6cfc9-ql82t
	12b16121c4abf       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   3e2fabd8cf62e       kube-proxy-skpck
	05140842b2d25       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   05031bce69ad4       etcd-addons-353302
	184c49a70becd       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   4b29be09868dc       kube-controller-manager-addons-353302
	c3ad1f71fd03e       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   354b60807bef2       kube-apiserver-addons-353302
	b6ef915274cd9       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   ef24e90cf2f2f       kube-scheduler-addons-353302
	
	
	==> controller_ingress [9f39a5c1c4d4] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0915 06:34:49.629901       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0915 06:34:51.812871       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0915 06:34:51.850632       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0915 06:34:51.868042       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0915 06:34:51.882808       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"8751bb5d-7018-4519-a83b-bcc45d8865d1", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0915 06:34:51.888931       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0fc53953-3607-4c5e-9a4b-2a3048391224", APIVersion:"v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0915 06:34:51.889368       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"07f662f6-ce0a-4a1b-a1cd-1e8ff188ddce", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0915 06:34:53.073764       7 nginx.go:317] "Starting NGINX process"
	I0915 06:34:53.074515       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0915 06:34:53.090634       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:34:53.105332       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0915 06:34:53.179860       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0915 06:34:53.182814       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-vdksz"
	I0915 06:34:53.224126       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-vdksz" node="addons-353302"
	I0915 06:34:53.451425       7 controller.go:213] "Backend successfully reloaded"
	I0915 06:34:53.451718       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0915 06:34:53.452205       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vdksz", UID:"b296057f-bbf6-4661-9781-62cf2e569f14", APIVersion:"v1", ResourceVersion:"1276", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
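
The controller log above shows the standard client-go flow: acquire the ingress-nginx-leader Lease, get elected, then reload the NGINX backend when configuration changes. A minimal sketch of that leader-election setup (illustrative only, not the ingress-nginx source; the lease name and namespace are taken from the log, and the timings are assumed values):

	package election

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLease blocks while campaigning for the same Lease the controller
	// log above acquires, running the callbacks on transitions.
	func runWithLease(ctx context.Context, cs kubernetes.Interface, identity string) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "ingress-nginx", Name: "ingress-nginx-leader"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("successfully acquired lease; syncing ingress status")
				},
				OnStoppedLeading: func() {
					log.Printf("lost lease, stepping down: %s", identity)
				},
			},
		})
	}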
	
	
	==> coredns [157dd5a368ee] <==
	[INFO] 10.244.0.7:33141 - 1569 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077754s
	[INFO] 10.244.0.7:48011 - 30968 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010105s
	[INFO] 10.244.0.7:48011 - 14836 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000196921s
	[INFO] 10.244.0.7:49558 - 8540 "AAAA IN registry.kube-system.svc.cluster.local.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000130385s
	[INFO] 10.244.0.7:49558 - 19800 "A IN registry.kube-system.svc.cluster.local.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 99 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000068254s
	[INFO] 10.244.0.7:58841 - 57471 "AAAA IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000086766s
	[INFO] 10.244.0.7:58841 - 53627 "A IN registry.kube-system.svc.cluster.local.c.p79a29526b6c1e63c-tp.internal. udp 88 false 512" NXDOMAIN qr,aa,rd,ra 193 0.000609398s
	[INFO] 10.244.0.7:50956 - 35845 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000118192s
	[INFO] 10.244.0.7:50956 - 29702 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000168791s
	[INFO] 10.244.0.7:35673 - 19855 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000340225s
	[INFO] 10.244.0.7:35673 - 10890 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077582s
	[INFO] 10.244.0.26:45208 - 12731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00087275s
	[INFO] 10.244.0.26:59714 - 47108 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000231576s
	[INFO] 10.244.0.26:38547 - 62871 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000282617s
	[INFO] 10.244.0.26:43055 - 36728 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000481226s
	[INFO] 10.244.0.26:52772 - 2072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000195416s
	[INFO] 10.244.0.26:38298 - 44980 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155873s
	[INFO] 10.244.0.26:37123 - 38813 "A IN storage.googleapis.com.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.00427225s
	[INFO] 10.244.0.26:59104 - 11295 "AAAA IN storage.googleapis.com.us-east1-c.c.p79a29526b6c1e63c-tp.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 190 0.004296955s
	[INFO] 10.244.0.26:60512 - 37913 "A IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.00406128s
	[INFO] 10.244.0.26:36750 - 61584 "AAAA IN storage.googleapis.com.c.p79a29526b6c1e63c-tp.internal. udp 83 false 1232" NXDOMAIN qr,rd,ra 177 0.006599702s
	[INFO] 10.244.0.26:41446 - 41636 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004637411s
	[INFO] 10.244.0.26:34997 - 39604 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005970722s
	[INFO] 10.244.0.26:34368 - 13025 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00321647s
	[INFO] 10.244.0.26:36228 - 17185 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003006662s
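
The NXDOMAIN fan-out above is ndots:5 search-path expansion at work: the pod's resolv.conf (re-written by cri-dockerd earlier in this report) lists five search suffixes, so a name like registry.kube-system.svc.cluster.local is first tried against svc.cluster.local, cluster.local, the two GCE project-internal domains, and google.internal, each doubled for A and AAAA, before the bare name answers NOERROR. A trailing dot makes the name rooted (fully qualified) and skips that fan-out; a small Go illustration (hedged: this relies on the stub resolver honoring rooted names, which both glibc and Go's built-in resolver do):

	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		// The trailing dot marks the name as fully qualified, so the resolver
		// skips the resolv.conf search suffixes that produced the NXDOMAIN
		// round-trips per lookup in the CoreDNS log above.
		addrs, err := net.DefaultResolver.LookupHost(context.Background(),
			"registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs) // the registry Service ClusterIP, when run in-cluster
	}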
	
	
	==> describe nodes <==
	Name:               addons-353302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-353302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-353302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-353302
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:33:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-353302
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:45:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:45:09 +0000   Sun, 15 Sep 2024 06:33:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:45:09 +0000   Sun, 15 Sep 2024 06:33:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:45:09 +0000   Sun, 15 Sep 2024 06:33:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:45:09 +0000   Sun, 15 Sep 2024 06:33:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-353302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  119475748Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             8141780Ki
	  pods:               110
	System Info:
	  Machine ID:                 91357bc2d2854b04839b68f2c4a077cf
	  System UUID:                8d2255f8-0738-4ad9-8a4b-4f2fe383c3ea
	  Boot ID:                    8ee743c2-5ca4-4cc2-a942-cb483d0e7219
	  Kernel Version:             6.1.100+
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  default                     cloud-spanner-emulator-769b77f747-p42bq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-dxlhq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-r2ndn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vdksz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-ql82t                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-353302                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-353302                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-353302       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-skpck                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-353302                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-tl5pc             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-7wvqg               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-kj79f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-353302 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-353302 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-353302 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node addons-353302 event: Registered Node addons-353302 in Controller
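
The Allocated resources summary checks out against the pod table: the seven non-zero CPU requests (100m ingress-nginx controller, 100m coredns, 100m etcd, 250m kube-apiserver, 200m kube-controller-manager, 100m kube-scheduler, 100m metrics-server) sum to 950m, which is 47% of the node's 2 allocatable CPUs, matching the 950m (47%) row. The same arithmetic with apimachinery's resource.Quantity, as a throwaway check rather than part of the report's tooling:

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// The non-zero CPU requests from the node description above.
		requests := []string{"100m", "100m", "100m", "250m", "200m", "100m", "100m"}
		total := resource.MustParse("0")
		for _, r := range requests {
			total.Add(resource.MustParse(r))
		}
		alloc := resource.MustParse("2") // allocatable cpu from the node status
		// Integer division truncates, matching kubectl's 47% display.
		fmt.Printf("%s of %s requested (%d%%)\n",
			total.String(), alloc.String(), total.MilliValue()*100/alloc.MilliValue())
	}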
	
	
	==> dmesg <==
	[  +0.000030] ll header: 00000000: ff ff ff ff ff ff ba 2f 77 2e 2a bb 08 06
	[  +1.618483] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a 12 db 85 a9 16 08 06
	[  +0.026114] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e b8 8a 91 94 dc 08 06
	[  +2.357494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 11 e9 fd 35 f8 08 06
	[  +7.425463] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca 91 37 87 3b 13 08 06
	[  +9.146923] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 92 06 28 04 f1 ca 08 06
	[  +1.207614] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 b2 1a 89 c7 92 08 06
	[  +0.309521] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 7b 41 35 f8 24 08 06
	[  +0.799887] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 86 b0 bd 69 9a 08 06
	[  +0.464946] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 66 fd 36 54 f0 16 08 06
	[  +0.946026] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 b9 b4 f0 7c 36 08 06
	[Sep15 06:35] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da e2 58 e9 9c c9 08 06
	[  +0.001144] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee be c2 bd 03 5e 08 06
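
The "martian source" entries are the kernel flagging packets whose source address (pod IPs in 10.244.0.0/24) arrived on an interface where that source is not considered routable; with minikube's bridged container networking inside a nested VM this is common noise and, as far as this report shows, harmless. Whether the kernel logs them at all is controlled by the log_martians sysctl; a tiny Go check of its current value (a sketch, using the standard procfs path):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// net.ipv4.conf.all.log_martians: 1 means the kernel emits dmesg lines
		// like the ones above; 0 silences them.
		data, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("log_martians =", strings.TrimSpace(string(data)))
	}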
	
	
	==> etcd [05140842b2d2] <==
	{"level":"info","ts":"2024-09-15T06:35:39.837445Z","caller":"traceutil/trace.go:171","msg":"trace[954669954] transaction","detail":"{read_only:false; response_revision:1483; number_of_response:1; }","duration":"235.880022ms","start":"2024-09-15T06:35:39.601528Z","end":"2024-09-15T06:35:39.837408Z","steps":["trace[954669954] 'process raft request'  (duration: 235.02735ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:42:59.698310Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1853}
	{"level":"info","ts":"2024-09-15T06:43:00.045350Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1853,"took":"345.659939ms","hash":3853528681,"current-db-size-bytes":8970240,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4993024,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-15T06:43:00.045414Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3853528681,"revision":1853,"compact-revision":-1}
	{"level":"info","ts":"2024-09-15T06:44:39.439265Z","caller":"traceutil/trace.go:171","msg":"trace[184615698] transaction","detail":"{read_only:false; response_revision:2581; number_of_response:1; }","duration":"205.956238ms","start":"2024-09-15T06:44:39.233216Z","end":"2024-09-15T06:44:39.439172Z","steps":["trace[184615698] 'process raft request'  (duration: 66.530175ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:44:39.441842Z","caller":"traceutil/trace.go:171","msg":"trace[979338484] linearizableReadLoop","detail":"{readStateIndex:2757; appliedIndex:2757; }","duration":"195.698203ms","start":"2024-09-15T06:44:39.245846Z","end":"2024-09-15T06:44:39.441545Z","steps":["trace[979338484] 'read index received'  (duration: 195.691226ms)","trace[979338484] 'applied index is now lower than readState.Index'  (duration: 5.655µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.631089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.159454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpathplugin-zwlrm\" ","response":"range_response_count:1 size:13887"}
	{"level":"warn","ts":"2024-09-15T06:44:39.645760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.68299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:44:39.645935Z","caller":"traceutil/trace.go:171","msg":"trace[1978664972] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2581; }","duration":"169.875064ms","start":"2024-09-15T06:44:39.476043Z","end":"2024-09-15T06:44:39.645918Z","steps":["trace[1978664972] 'range keys from in-memory index tree'  (duration: 168.105272ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:44:39.656598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.015026ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:44:39.678590Z","caller":"traceutil/trace.go:171","msg":"trace[378164954] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2581; }","duration":"194.05264ms","start":"2024-09-15T06:44:39.484516Z","end":"2024-09-15T06:44:39.678569Z","steps":["trace[378164954] 'range keys from in-memory index tree'  (duration: 171.995039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:44:39.656680Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.584652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/csi-hostpath-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:44:39.678850Z","caller":"traceutil/trace.go:171","msg":"trace[2026903955] range","detail":"{range_begin:/registry/services/endpoints/kube-system/csi-hostpath-resizer; range_end:; response_count:0; response_revision:2581; }","duration":"193.7095ms","start":"2024-09-15T06:44:39.485085Z","end":"2024-09-15T06:44:39.678794Z","steps":["trace[2026903955] 'range keys from in-memory index tree'  (duration: 169.204252ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:44:39.657741Z","caller":"traceutil/trace.go:171","msg":"trace[161839898] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpathplugin-zwlrm; range_end:; response_count:1; response_revision:2581; }","duration":"385.324607ms","start":"2024-09-15T06:44:39.245838Z","end":"2024-09-15T06:44:39.631163Z","steps":["trace[161839898] 'agreement among raft nodes before linearized reading'  (duration: 199.680401ms)","trace[161839898] 'range keys from in-memory index tree'  (duration: 184.944606ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.679663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:44:39.245610Z","time spent":"433.960397ms","remote":"127.0.0.1:60760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":13911,"request content":"key:\"/registry/pods/kube-system/csi-hostpathplugin-zwlrm\" "}
	{"level":"warn","ts":"2024-09-15T06:44:39.677959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.35242ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031906306849647 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/csi-hostpathplugin-zwlrm.17f557e666132797\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/csi-hostpathplugin-zwlrm.17f557e666132797\" value_size:708 lease:8128031906306849243 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-15T06:44:39.744615Z","caller":"traceutil/trace.go:171","msg":"trace[391639896] transaction","detail":"{read_only:false; response_revision:2582; number_of_response:1; }","duration":"259.081868ms","start":"2024-09-15T06:44:39.485513Z","end":"2024-09-15T06:44:39.744595Z","steps":["trace[391639896] 'compare'  (duration: 169.262449ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:44:39.744480Z","caller":"traceutil/trace.go:171","msg":"trace[85188592] linearizableReadLoop","detail":"{readStateIndex:2758; appliedIndex:2757; }","duration":"257.098871ms","start":"2024-09-15T06:44:39.487356Z","end":"2024-09-15T06:44:39.744455Z","steps":["trace[85188592] 'read index received'  (duration: 142.486µs)","trace[85188592] 'applied index is now lower than readState.Index'  (duration: 256.953046ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.776275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.336899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-15T06:44:39.777450Z","caller":"traceutil/trace.go:171","msg":"trace[330317599] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2582; }","duration":"285.539843ms","start":"2024-09-15T06:44:39.490874Z","end":"2024-09-15T06:44:39.776414Z","steps":["trace[330317599] 'agreement among raft nodes before linearized reading'  (duration: 255.341029ms)","trace[330317599] 'range keys from in-memory index tree'  (duration: 29.731667ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.777760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.398781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/csi-hostpath-resizer-m59kl\" ","response":"range_response_count:1 size:1076"}
	{"level":"info","ts":"2024-09-15T06:44:39.777796Z","caller":"traceutil/trace.go:171","msg":"trace[1210177505] range","detail":"{range_begin:/registry/endpointslices/kube-system/csi-hostpath-resizer-m59kl; range_end:; response_count:1; response_revision:2582; }","duration":"292.441784ms","start":"2024-09-15T06:44:39.485341Z","end":"2024-09-15T06:44:39.777783Z","steps":["trace[1210177505] 'agreement among raft nodes before linearized reading'  (duration: 260.404442ms)","trace[1210177505] 'range keys from in-memory index tree'  (duration: 31.921207ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.801807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.015946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" ","response":"range_response_count:1 size:3690"}
	{"level":"info","ts":"2024-09-15T06:44:39.820437Z","caller":"traceutil/trace.go:171","msg":"trace[1291044770] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-resizer; range_end:; response_count:1; response_revision:2582; }","duration":"331.665022ms","start":"2024-09-15T06:44:39.488750Z","end":"2024-09-15T06:44:39.820415Z","steps":["trace[1291044770] 'agreement among raft nodes before linearized reading'  (duration: 257.422934ms)","trace[1291044770] 'range keys from in-memory index tree'  (duration: 18.324823ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:44:39.834849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:44:39.488709Z","time spent":"346.102162ms","remote":"127.0.0.1:32804","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":3714,"request content":"key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" "}
	
	
	==> gcp-auth [8f252a34afa7] <==
	2024/09/15 06:35:10 GCP Auth Webhook started!
	2024/09/15 06:35:30 Ready to marshal response ...
	2024/09/15 06:35:30 Ready to write response ...
	2024/09/15 06:35:31 Ready to marshal response ...
	2024/09/15 06:35:31 Ready to write response ...
	2024/09/15 06:35:57 Ready to marshal response ...
	2024/09/15 06:35:57 Ready to write response ...
	2024/09/15 06:35:57 Ready to marshal response ...
	2024/09/15 06:35:57 Ready to write response ...
	2024/09/15 06:35:57 Ready to marshal response ...
	2024/09/15 06:35:57 Ready to write response ...
	2024/09/15 06:44:14 Ready to marshal response ...
	2024/09/15 06:44:14 Ready to write response ...
	2024/09/15 06:44:16 Ready to marshal response ...
	2024/09/15 06:44:16 Ready to write response ...
	2024/09/15 06:44:29 Ready to marshal response ...
	2024/09/15 06:44:29 Ready to write response ...
	2024/09/15 06:44:54 Ready to marshal response ...
	2024/09/15 06:44:54 Ready to write response ...
	2024/09/15 06:44:54 Ready to marshal response ...
	2024/09/15 06:44:54 Ready to write response ...
	2024/09/15 06:45:04 Ready to marshal response ...
	2024/09/15 06:45:04 Ready to write response ...
	
	
	==> kernel <==
	 06:45:17 up 49 min,  0 users,  load average: 0.47, 0.98, 1.12
	Linux addons-353302 6.1.100+ #1 SMP PREEMPT_DYNAMIC Sat Aug 17 14:12:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c3ad1f71fd03] <==
	I0915 06:35:49.383119       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0915 06:35:49.476623       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0915 06:35:49.681588       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0915 06:35:49.682803       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0915 06:35:49.951148       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0915 06:35:49.975115       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0915 06:35:50.477709       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0915 06:35:50.780305       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0915 06:44:25.063463       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 06:44:46.486524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:44:46.488044       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:44:46.514599       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:44:46.514674       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:44:46.544220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:44:46.544571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:44:46.551150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:44:46.551238       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:44:46.599387       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:44:46.599518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:44:47.544828       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:44:47.599786       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:44:47.696403       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0915 06:45:05.300762       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:45:05.312774       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:45:05.321446       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [184c49a70bec] <==
	W0915 06:44:56.008429       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:56.008520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:56.784402       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:56.784532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:57.510515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:57.510598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:02.789658       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:02.789803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:45:05.234060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="6.833µs"
	W0915 06:45:06.506164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:06.506224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:08.243262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:08.243341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:45:10.001537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-353302"
	W0915 06:45:10.359864       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:10.359920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:12.337236       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:12.337405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:12.377450       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:12.377535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:45:12.538104       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 06:45:12.538196       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:45:12.909785       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0915 06:45:12.909956       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:45:15.413746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.602µs"
	
	
	==> kube-proxy [12b16121c4ab] <==
	I0915 06:33:18.703072       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:33:21.404041       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:33:21.405750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:33:22.766397       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:33:22.768170       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:33:22.800231       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:33:22.801406       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:33:22.801466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:33:22.837608       1 config.go:199] "Starting service config controller"
	I0915 06:33:22.837698       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:33:22.837798       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:33:22.837817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:33:22.838912       1 config.go:328] "Starting node config controller"
	I0915 06:33:22.838935       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:33:23.050420       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:33:23.055923       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:33:23.056063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b6ef915274cd] <==
	W0915 06:33:02.289888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:02.294811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:02.290015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:02.295636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:02.290127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:33:02.298270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:02.290205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:33:02.298806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:02.299050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:02.299329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.132842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:03.132906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.196642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:03.196704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.290727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:33:03.291113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.356606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:33:03.356666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.407648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:33:03.408079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.484616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:33:03.484771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:03.716274       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:33:03.716342       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:33:05.968789       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:45:06 addons-353302 kubelet[2180]: I0915 06:45:06.849120    2180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gdk2k\" (UniqueName: \"kubernetes.io/projected/0c10725c-b867-4324-8c1c-8ee7698d76f2-kube-api-access-gdk2k\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:06 addons-353302 kubelet[2180]: I0915 06:45:06.849149    2180 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0c10725c-b867-4324-8c1c-8ee7698d76f2-data\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:06 addons-353302 kubelet[2180]: I0915 06:45:06.849166    2180 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0c10725c-b867-4324-8c1c-8ee7698d76f2-script\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:07 addons-353302 kubelet[2180]: I0915 06:45:07.518337    2180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9965690483fbde88cfbde3d8f617b6dd066d47da084a9531297a0c590b18d5e9"
	Sep 15 06:45:10 addons-353302 kubelet[2180]: I0915 06:45:10.917022    2180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c10725c-b867-4324-8c1c-8ee7698d76f2" path="/var/lib/kubelet/pods/0c10725c-b867-4324-8c1c-8ee7698d76f2/volumes"
	Sep 15 06:45:11 addons-353302 kubelet[2180]: I0915 06:45:11.901317    2180 scope.go:117] "RemoveContainer" containerID="b87d61d36f605207148e7aec862817b882969e6d5814de703fbf61ac46bc8d6e"
	Sep 15 06:45:11 addons-353302 kubelet[2180]: E0915 06:45:11.901640    2180 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-dxlhq_gadget(1aa7255c-4f2d-4fc0-8e7e-f134b3d66019)\"" pod="gadget/gadget-dxlhq" podUID="1aa7255c-4f2d-4fc0-8e7e-f134b3d66019"
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.608233    2180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljg5h\" (UniqueName: \"kubernetes.io/projected/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-kube-api-access-ljg5h\") pod \"fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603\" (UID: \"fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603\") "
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.608369    2180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-gcp-creds\") pod \"fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603\" (UID: \"fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603\") "
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.608532    2180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603" (UID: "fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.612476    2180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-kube-api-access-ljg5h" (OuterVolumeSpecName: "kube-api-access-ljg5h") pod "fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603" (UID: "fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603"). InnerVolumeSpecName "kube-api-access-ljg5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.709981    2180 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-gcp-creds\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.710050    2180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ljg5h\" (UniqueName: \"kubernetes.io/projected/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603-kube-api-access-ljg5h\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:14 addons-353302 kubelet[2180]: I0915 06:45:14.919153    2180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603" path="/var/lib/kubelet/pods/fe68d4f8-b6c0-4145-bfa6-7d2f5bdc4603/volumes"
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.219049    2180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2zt\" (UniqueName: \"kubernetes.io/projected/e2cd5872-f5e5-4446-9681-3487f553eae7-kube-api-access-5b2zt\") pod \"e2cd5872-f5e5-4446-9681-3487f553eae7\" (UID: \"e2cd5872-f5e5-4446-9681-3487f553eae7\") "
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.228099    2180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cd5872-f5e5-4446-9681-3487f553eae7-kube-api-access-5b2zt" (OuterVolumeSpecName: "kube-api-access-5b2zt") pod "e2cd5872-f5e5-4446-9681-3487f553eae7" (UID: "e2cd5872-f5e5-4446-9681-3487f553eae7"). InnerVolumeSpecName "kube-api-access-5b2zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.319565    2180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5b2zt\" (UniqueName: \"kubernetes.io/projected/e2cd5872-f5e5-4446-9681-3487f553eae7-kube-api-access-5b2zt\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.420393    2180 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4j6p\" (UniqueName: \"kubernetes.io/projected/f49b325f-086e-4d70-93ec-6ecea97709a2-kube-api-access-k4j6p\") pod \"f49b325f-086e-4d70-93ec-6ecea97709a2\" (UID: \"f49b325f-086e-4d70-93ec-6ecea97709a2\") "
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.425585    2180 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49b325f-086e-4d70-93ec-6ecea97709a2-kube-api-access-k4j6p" (OuterVolumeSpecName: "kube-api-access-k4j6p") pod "f49b325f-086e-4d70-93ec-6ecea97709a2" (UID: "f49b325f-086e-4d70-93ec-6ecea97709a2"). InnerVolumeSpecName "kube-api-access-k4j6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.521356    2180 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k4j6p\" (UniqueName: \"kubernetes.io/projected/f49b325f-086e-4d70-93ec-6ecea97709a2-kube-api-access-k4j6p\") on node \"addons-353302\" DevicePath \"\""
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.951635    2180 scope.go:117] "RemoveContainer" containerID="760945e0aec9e8579041b318a39409818fccc842924d525e41811a2061a8346f"
	Sep 15 06:45:16 addons-353302 kubelet[2180]: I0915 06:45:16.989472    2180 scope.go:117] "RemoveContainer" containerID="a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131"
	Sep 15 06:45:17 addons-353302 kubelet[2180]: I0915 06:45:17.016076    2180 scope.go:117] "RemoveContainer" containerID="a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131"
	Sep 15 06:45:17 addons-353302 kubelet[2180]: E0915 06:45:17.017704    2180 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131" containerID="a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131"
	Sep 15 06:45:17 addons-353302 kubelet[2180]: I0915 06:45:17.017758    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131"} err="failed to get container status \"a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131\": rpc error: code = Unknown desc = Error response from daemon: No such container: a7d9465e821ccd61f10834a0e4c8e540c9153b98a680c936c90a842c3198c131"
	
	
	==> storage-provisioner [7003db7b7db7] <==
	I0915 06:33:28.765263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:33:29.022349       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:33:29.067415       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:33:29.943161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:33:29.947621       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-353302_ceaf4d63-1ac4-451c-8aab-afdcaec1bd2f!
	I0915 06:33:29.969080       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d6fc148-a7d6-4c44-82c0-c6f780af22da", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-353302_ceaf4d63-1ac4-451c-8aab-afdcaec1bd2f became leader
	I0915 06:33:30.257411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-353302_ceaf4d63-1ac4-451c-8aab-afdcaec1bd2f!
	

                                                
                                                
-- /stdout --
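The etcd section in the dump above is dense with `apply request took too long` warnings: read-only ranges taking 170-430ms against the 100ms expected duration, right as the registry test was tearing down. That pattern usually points at a slow or contended backing disk rather than a bug in the test itself. A minimal way to poke at it from outside (a sketch; it assumes minikube's usual `etcd-<node>` static-pod naming and its `/var/lib/minikube/certs/etcd` certificate layout, either of which may differ):

    # Ask etcd for its own status; high DB size or a laggy response here corroborates slow I/O
    kubectl --context addons-353302 -n kube-system exec etcd-addons-353302 -- sh -c \
      'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint status -w table'

Sustained apply latency on a shared Cloud Shell VM is plausible background noise, and it may help explain why parallel addon tests occasionally blow through their timeouts.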
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-353302 -n addons-353302
helpers_test.go:261: (dbg) Run:  kubectl --context addons-353302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-l5qg9 ingress-nginx-admission-patch-72wrn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-353302 describe pod busybox ingress-nginx-admission-create-l5qg9 ingress-nginx-admission-patch-72wrn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-353302 describe pod busybox ingress-nginx-admission-create-l5qg9 ingress-nginx-admission-patch-72wrn: exit status 1 (125.115681ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-353302/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:35:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pk8dn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pk8dn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m22s                   default-scheduler  Successfully assigned default/busybox to addons-353302
	  Normal   Pulling    7m57s (x4 over 9m21s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m57s (x4 over 9m21s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m57s (x4 over 9m21s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m21s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m21s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l5qg9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-72wrn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-353302 describe pod busybox ingress-nginx-admission-create-l5qg9 ingress-nginx-admission-patch-72wrn: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.39s)
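Both registry pods went healthy, so the failure is purely the in-cluster probe: `wget --spider http://registry.kube-system.svc.cluster.local` hung for the full minute and the pod was deleted without ever printing an HTTP status. To reproduce the probe by hand (a sketch against the same `addons-353302` context; the `registry-probe` pod name is arbitrary):

    # Does the Service exist, and does it have endpoints behind it?
    kubectl --context addons-353302 -n kube-system get svc,endpoints registry

    # Same probe the test runs, with an explicit timeout so a hang fails fast
    kubectl --context addons-353302 run --rm registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c 'wget --spider -S -T 10 http://registry.kube-system.svc.cluster.local'

If the endpoints are populated but the probe still times out, in-cluster DNS is the next suspect; `kubectl -n kube-system logs -l k8s-app=kube-dns` would be the place to look.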

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0915 06:50:12.243817    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.252679    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.274330    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.295821    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.337368    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.418914    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.580686    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:12.902403    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0915 06:50:17.388387    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0915 06:50:53.233972    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
2024/09/15 06:51:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:234: (dbg) Non-zero exit: kubectl --context functional-913422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: context deadline exceeded (1.351µs)
functional_test_tunnel_test.go:245: nginx-svc svc.status.loadBalancer.ingress never got an IP: context deadline exceeded
functional_test_tunnel_test.go:246: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc
functional_test_tunnel_test.go:250: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.101.30.29   <pending>     80:32432/TCP   3m10s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.11s)
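`nginx-svc` sat at `<pending>` for the whole 3m10s, meaning nothing ever wrote `status.loadBalancer.ingress`. In minikube that field is populated by a running `minikube tunnel` process, not by the cluster itself, so the first check (a sketch against the same `functional-913422` profile) is whether the tunnel is actually alive:

    # Terminal 1: the tunnel must keep running in the foreground
    minikube tunnel -p functional-913422

    # Terminal 2: watch EXTERNAL-IP flip from <pending> to an address
    kubectl --context functional-913422 get svc nginx-svc -w

The repeated cert_rotation errors interleaved above (a stale client.crt from the already-deleted addons-353302 profile) are a separate nuisance, but they hint that leftover state from the earlier test run was still being read.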

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdany-port2551516277/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726383025759317344" to /tmp/TestFunctionalparallelMountCmdany-port2551516277/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726383025759317344" to /tmp/TestFunctionalparallelMountCmdany-port2551516277/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726383025759317344" to /tmp/TestFunctionalparallelMountCmdany-port2551516277/001/test-1726383025759317344
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (630.423091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (385.926279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (452.701603ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.077129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (491.850063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0915 06:50:32.752482    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.361728ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.355977ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:125: /mount-9p did not appear within 14.199301579s: exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (388.622891ms)

                                                
                                                
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-913422 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "sudo umount -f /mount-9p": exit status 1 (380.903966ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: no mount point specified.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:92: "out/minikube-linux-amd64 -p functional-913422 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdany-port2551516277/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdany-port2551516277/001:/mount-9p --alsologtostderr -v=1] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdany-port2551516277/001:/mount-9p --alsologtostderr -v=1] stderr:
I0915 06:50:25.898645   43210 out.go:345] Setting OutFile to fd 1 ...
I0915 06:50:25.899095   43210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:25.899181   43210 out.go:358] Setting ErrFile to fd 2...
I0915 06:50:25.899209   43210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:25.899725   43210 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:50:25.900300   43210 mustload.go:65] Loading cluster: functional-913422
I0915 06:50:25.901202   43210 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:50:25.902306   43210 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:50:25.964562   43210 host.go:66] Checking if "functional-913422" exists ...
I0915 06:50:25.965236   43210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:50:26.177711   43210 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:50:26.154803873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:50:26.177972   43210 cli_runner.go:164] Run: docker network inspect functional-913422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:50:26.218807   43210 out.go:201] 
W0915 06:50:26.220703   43210 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0915 06:50:26.222401   43210 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (15.10s)
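
All three MountCmd failures in this run share one root cause: minikube mount serves the host directory to the node over the 9p network filesystem, and the Cloud Shell kernel evidently lacks a 9p client, so minikube bails out with HOST_UNSUPPORTED before attempting the mount. A minimal sketch for confirming 9p support on a Linux host (standard paths; whether a loadable module exists depends on the kernel build):

# Sketch: check whether the running kernel can act as a 9p client.
# /proc/filesystems lists filesystems the kernel has registered;
# modprobe succeeds only if a 9p module was built for this kernel.
if grep -qw 9p /proc/filesystems || sudo modprobe 9p 2>/dev/null; then
  echo "9p available: minikube mount should work"
else
  echo "no 9p support: expect HOST_UNSUPPORTED from minikube mount"
fi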
TestFunctional/parallel/MountCmd/specific-port (13.82s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdspecific-port2689960488/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (564.565021ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.125475ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.626475ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.856671ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.439337ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.238332ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.907403ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 12.89210494s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (419.304156ms)
-- stdout --
	ls: cannot access '/mount-9p': No such file or directory
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-913422 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "sudo umount -f /mount-9p": exit status 1 (395.044033ms)
-- stdout --
	umount: /mount-9p: no mount point specified.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-913422 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdspecific-port2689960488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdspecific-port2689960488/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdspecific-port2689960488/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0915 06:50:41.003807   43878 out.go:345] Setting OutFile to fd 1 ...
I0915 06:50:41.004100   43878 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:41.004119   43878 out.go:358] Setting ErrFile to fd 2...
I0915 06:50:41.004130   43878 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:41.004553   43878 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:50:41.005085   43878 mustload.go:65] Loading cluster: functional-913422
I0915 06:50:41.005804   43878 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:50:41.006936   43878 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:50:41.053040   43878 host.go:66] Checking if "functional-913422" exists ...
I0915 06:50:41.053784   43878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:50:41.219935   43878 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:50:41.199347519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:50:41.220150   43878 cli_runner.go:164] Run: docker network inspect functional-913422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:50:41.268647   43878 out.go:201] 
W0915 06:50:41.270469   43878 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0915 06:50:41.272334   43878 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (13.82s)
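
The repeated findmnt attempts above are the test's own polling loop (functional_test_mount_test.go:243): it re-runs the check until the mount appears or roughly 13 seconds elapse. A rough hand-rolled equivalent for reproducing the wait outside the test harness, with the profile name and timeout taken from this run and a 1-second retry interval assumed:

# Sketch: poll for the 9p mount the way the test does, giving up after ~13s.
deadline=$((SECONDS + 13))
until out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T /mount-9p | grep 9p"; do
  [ "$SECONDS" -ge "$deadline" ] && { echo "/mount-9p did not appear in time"; break; }
  sleep 1
done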
TestFunctional/parallel/MountCmd/VerifyCleanup (14.26s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (1.13296654s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (367.844455ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (377.048919ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (402.689453ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (370.890228ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (1.233108417s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "findmnt -T" /mount1: exit status 1 (1.537134686s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:342: mount was not ready in time: exit status 1
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount1 --alsologtostderr -v=1] stdout:
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount1 --alsologtostderr -v=1] stderr:
I0915 06:50:55.005885   44552 out.go:345] Setting OutFile to fd 1 ...
I0915 06:50:55.036946   44552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:55.036973   44552 out.go:358] Setting ErrFile to fd 2...
I0915 06:50:55.036983   44552 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:55.037818   44552 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:50:55.044109   44552 mustload.go:65] Loading cluster: functional-913422
I0915 06:50:55.044868   44552 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:50:55.048412   44552 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:50:55.127766   44552 host.go:66] Checking if "functional-913422" exists ...
I0915 06:50:55.128254   44552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:50:55.638587   44552 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:50:55.550243989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:50:55.638845   44552 cli_runner.go:164] Run: docker network inspect functional-913422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:50:55.673226   44552 out.go:201] 
W0915 06:50:55.675448   44552 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0915 06:50:55.677121   44552 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount2 --alsologtostderr -v=1] stdout:
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount2 --alsologtostderr -v=1] stderr:
I0915 06:50:54.969605   44553 out.go:345] Setting OutFile to fd 1 ...
I0915 06:50:54.974691   44553 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:54.975053   44553 out.go:358] Setting ErrFile to fd 2...
I0915 06:50:54.975116   44553 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:54.997247   44553 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:50:54.998807   44553 mustload.go:65] Loading cluster: functional-913422
I0915 06:50:55.003147   44553 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:50:55.010922   44553 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:50:55.088782   44553 host.go:66] Checking if "functional-913422" exists ...
I0915 06:50:55.089588   44553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:50:55.543876   44553 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:50:55.461441215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:50:55.544114   44553 cli_runner.go:164] Run: docker network inspect functional-913422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:50:55.601685   44553 out.go:201] 
W0915 06:50:55.603746   44553 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0915 06:50:55.605842   44553 out.go:201] 
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount3 --alsologtostderr -v=1] stdout:
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-913422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1979670047/001:/mount3 --alsologtostderr -v=1] stderr:
I0915 06:50:54.998801   44554 out.go:345] Setting OutFile to fd 1 ...
I0915 06:50:55.006156   44554 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:55.006176   44554 out.go:358] Setting ErrFile to fd 2...
I0915 06:50:55.006186   44554 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:50:55.006578   44554 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:50:55.007065   44554 mustload.go:65] Loading cluster: functional-913422
I0915 06:50:55.007802   44554 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:50:55.021440   44554 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:50:55.145535   44554 host.go:66] Checking if "functional-913422" exists ...
I0915 06:50:55.146233   44554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0915 06:50:55.629795   44554 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:50:55.550243989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0915 06:50:55.630099   44554 cli_runner.go:164] Run: docker network inspect functional-913422 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0915 06:50:55.706820   44554 out.go:201] 
W0915 06:50:55.708617   44554 out.go:270] X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
X Exiting due to HOST_UNSUPPORTED: The host does not support filesystem 9p.
I0915 06:50:55.710540   44554 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (14.26s)
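
Here the three mount daemons had already exited on their own (HOST_UNSUPPORTED), so there was nothing left to tear down. On hosts where the mounts do get established, a manual cleanup after a failed run looks roughly like the sketch below; the pkill pattern is illustrative, not something the test itself executes:

# Sketch: kill lingering mount processes, then force-unmount inside the node.
pkill -f "minikube-linux-amd64 mount -p functional-913422" || true
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 -p functional-913422 ssh "sudo umount -f $m" || true
done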
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0915 06:52:56.145511    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-913422 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.101.30.29   <pending>     80:32432/TCP   4m54s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.22s)
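
The EXTERNAL-IP stuck at <pending> is the telltale: on minikube, a LoadBalancer service only receives an address while minikube tunnel is running and routing traffic to it. The test manages the tunnel itself; checking by hand looks roughly like the following (run the tunnel in a second shell; it may prompt for sudo):

# shell 1: keep the tunnel alive for as long as the service is needed
out/minikube-linux-amd64 -p functional-913422 tunnel
# shell 2: watch until EXTERNAL-IP appears (Ctrl-C to stop), then fetch the page
kubectl --context functional-913422 get svc nginx-svc -w
ip=$(kubectl --context functional-913422 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$ip/" | grep "Welcome to nginx!"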
Test pass (97/108)

Order  Passed test  Duration (s)
3 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.13
4 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.14
5 TestAddons/Setup 199.18
7 TestAddons/serial/Volcano 45.01
9 TestAddons/serial/GCPAuth/Namespaces 0.24
12 TestAddons/parallel/Ingress 21.13
13 TestAddons/parallel/InspektorGadget 11.43
14 TestAddons/parallel/MetricsServer 6.01
15 TestAddons/parallel/HelmTiller 11.2
17 TestAddons/parallel/CSI 31.77
18 TestAddons/parallel/Headlamp 19.35
19 TestAddons/parallel/CloudSpanner 5.68
20 TestAddons/parallel/LocalPath 55.41
21 TestAddons/parallel/NvidiaDevicePlugin 6.6
22 TestAddons/parallel/Yakd 12.2
23 TestAddons/StoppedEnableDisable 6.7
26 TestFunctional/serial/CopySyncFile 0.09
27 TestFunctional/serial/StartWithProxy 79.34
28 TestFunctional/serial/AuditLog 0
29 TestFunctional/serial/SoftStart 28.64
30 TestFunctional/serial/KubeContext 0.14
31 TestFunctional/serial/KubectlGetPods 0.12
34 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
35 TestFunctional/serial/CacheCmd/cache/add_local 1.45
36 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.15
37 TestFunctional/serial/CacheCmd/cache/list 0.09
38 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
39 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
40 TestFunctional/serial/CacheCmd/cache/delete 0.18
41 TestFunctional/serial/MinikubeKubectlCmd 1.38
42 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.18
43 TestFunctional/serial/ExtraConfig 49.08
44 TestFunctional/serial/ComponentHealth 0.12
45 TestFunctional/serial/LogsCmd 1.68
46 TestFunctional/serial/LogsFileCmd 1.63
47 TestFunctional/serial/InvalidService 5.13
49 TestFunctional/parallel/ConfigCmd 0.98
50 TestFunctional/parallel/DashboardCmd 17.48
51 TestFunctional/parallel/DryRun 0.8
52 TestFunctional/parallel/InternationalLanguage 0.35
53 TestFunctional/parallel/StatusCmd 2.19
57 TestFunctional/parallel/ServiceCmdConnect 12.05
58 TestFunctional/parallel/AddonsCmd 0.22
59 TestFunctional/parallel/PersistentVolumeClaim 29.69
61 TestFunctional/parallel/SSHCmd 1.34
62 TestFunctional/parallel/CpCmd 4.32
63 TestFunctional/parallel/MySQL 35.22
64 TestFunctional/parallel/FileSync 0.4
65 TestFunctional/parallel/CertSync 2.7
69 TestFunctional/parallel/NodeLabels 0.12
71 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
73 TestFunctional/parallel/License 1.81
75 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.23
76 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
78 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.84
80 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
81 TestFunctional/parallel/ServiceCmd/List 0.63
82 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
83 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
84 TestFunctional/parallel/ServiceCmd/Format 0.51
85 TestFunctional/parallel/ServiceCmd/URL 0.54
86 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
87 TestFunctional/parallel/ProfileCmd/profile_list 0.62
88 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
92 TestFunctional/parallel/Version/short 0.1
93 TestFunctional/parallel/Version/components 1.65
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
98 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
99 TestFunctional/parallel/ImageCommands/Setup 2.97
100 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
101 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
102 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.54
103 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
104 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
105 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
106 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
107 TestFunctional/parallel/DockerEnv/bash 1.53
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.15
116 TestFunctional/delete_echo-server_images 0.07
117 TestFunctional/delete_my-image_image 0.03
118 TestFunctional/delete_minikube_cached_images 0.03
123 TestStartStop/group/cloud-shell/serial/FirstStart 77.64
124 TestStartStop/group/cloud-shell/serial/DeployApp 9.57
125 TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive 1.34
126 TestStartStop/group/cloud-shell/serial/Stop 11.31
127 TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop 0.3
128 TestStartStop/group/cloud-shell/serial/SecondStart 272.92
129 TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop 6.01
130 TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop 5.21
131 TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages 0.33
132 TestStartStop/group/cloud-shell/serial/Pause 4.34
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-353302
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-353302: exit status 85 (129.847183ms)
-- stdout --
	* Profile "addons-353302" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353302"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.14s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-353302
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-353302: exit status 85 (140.463095ms)
-- stdout --
	* Profile "addons-353302" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353302"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.14s)
TestAddons/Setup (199.18s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-353302 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-353302 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m19.183048498s)
--- PASS: TestAddons/Setup (199.18s)
TestAddons/serial/Volcano (45.01s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 263.632729ms
addons_test.go:905: volcano-admission stabilized in 263.92407ms
addons_test.go:897: volcano-scheduler stabilized in 264.049769ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-sdmwn" [a87036a0-965d-4c71-a2a9-6f02e823c95f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00641986s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-qnjgs" [42857bfc-e090-4d0e-aab7-477cbf12402d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005238485s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-5rl8x" [848286e1-8072-421d-b9d2-ca03b6807cfe] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004936645s
addons_test.go:932: (dbg) Run:  kubectl --context addons-353302 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-353302 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-353302 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [6328519e-8cbd-480b-9d4f-bf1e66cd71f9] Pending
helpers_test.go:344: "test-job-nginx-0" [6328519e-8cbd-480b-9d4f-bf1e66cd71f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [6328519e-8cbd-480b-9d4f-bf1e66cd71f9] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.007333881s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable volcano --alsologtostderr -v=1: (10.840214389s)
--- PASS: TestAddons/serial/Volcano (45.01s)
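
The fixture testdata/vcjob.yaml is not reproduced in this log. A minimal Volcano job of the same shape (job test-job in namespace my-volcano with a single nginx task, consistent with the pod name test-job-nginx-0 above, and assuming the my-volcano namespace already exists) would look roughly like this illustrative reconstruction, not the repository's actual fixture:

kubectl --context addons-353302 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
EOF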
TestAddons/serial/GCPAuth/Namespaces (0.24s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-353302 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-353302 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)
TestAddons/parallel/Ingress (21.13s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-353302 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-353302 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-353302 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e7bd437c-a5b1-408c-89cf-09736558cfdc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e7bd437c-a5b1-408c-89cf-09736558cfdc] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00614383s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-353302 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable ingress-dns --alsologtostderr -v=1: (1.575054575s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable ingress --alsologtostderr -v=1: (8.069120682s)
--- PASS: TestAddons/parallel/Ingress (21.13s)
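
For reference, the two probes above work because the ingress controller answers on the node IP (192.168.49.2 in this run) and the ingress-dns addon serves DNS from that same address, so Ingress hostnames resolve without touching /etc/hosts. Reproduced by hand from the host running the docker driver, roughly:

# ask the ingress-dns server on the node IP to resolve an Ingress hostname
nslookup hello-john.test 192.168.49.2
# hit the ingress controller directly, pinning the hostname to the node IP
curl -s --resolve nginx.example.com:80:192.168.49.2 http://nginx.example.com/ | grep -i nginx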
TestAddons/parallel/InspektorGadget (11.43s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dxlhq" [1aa7255c-4f2d-4fc0-8e7e-f134b3d66019] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009231323s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-353302
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-353302: (6.416307476s)
--- PASS: TestAddons/parallel/InspektorGadget (11.43s)
TestAddons/parallel/MetricsServer (6.01s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.2788ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-tl5pc" [60336706-ef3c-4e47-b8a8-853c524e5125] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006480378s
addons_test.go:417: (dbg) Run:  kubectl --context addons-353302 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.01s)
TestAddons/parallel/HelmTiller (11.2s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.460113ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-7wvqg" [3b39209b-916c-4391-9fcb-048af767f63b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008870939s
addons_test.go:475: (dbg) Run:  kubectl --context addons-353302 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-353302 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.410062311s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.20s)
TestAddons/parallel/CSI (31.77s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 42.013252ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-353302 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-353302 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [44db16bf-dc4d-4920-ada4-7b4b7925b23f] Pending
helpers_test.go:344: "task-pv-pod" [44db16bf-dc4d-4920-ada4-7b4b7925b23f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [44db16bf-dc4d-4920-ada4-7b4b7925b23f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004582738s
addons_test.go:590: (dbg) Run:  kubectl --context addons-353302 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-353302 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-353302 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-353302 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-353302 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-353302 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d567496a-8b9e-4edc-a161-f05d84e0e676] Pending
helpers_test.go:344: "task-pv-pod-restore" [d567496a-8b9e-4edc-a161-f05d84e0e676] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d567496a-8b9e-4edc-a161-f05d84e0e676] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005279588s
addons_test.go:632: (dbg) Run:  kubectl --context addons-353302 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-353302 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-353302 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.825450472s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable volumesnapshots --alsologtostderr -v=1: (1.086497669s)
--- PASS: TestAddons/parallel/CSI (31.77s)
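Note: the manifests under testdata/csi-hostpath-driver/ are not reproduced in this log. A minimal sketch of the snapshot-and-restore pair the test creates, with class names assumed from the csi-hostpath-driver addon rather than taken from the log, would look roughly like:

# Sketch only: approximates the snapshot.yaml / pvc-restore.yaml steps above.
kubectl --context addons-353302 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo                           # name the test polls for readyToUse
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc                 # snapshot the original claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:                                       # restore from the snapshot
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF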

TestAddons/parallel/Headlamp (19.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-353302 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-353302 --alsologtostderr -v=1: (1.314850019s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-mp6s5" [a7c5a387-a856-46e7-9276-fad70b8ea006] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-mp6s5" [a7c5a387-a856-46e7-9276-fad70b8ea006] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-mp6s5" [a7c5a387-a856-46e7-9276-fad70b8ea006] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005386918s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable headlamp --alsologtostderr -v=1: (6.026709118s)
--- PASS: TestAddons/parallel/Headlamp (19.35s)

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-p42bq" [74e94dd4-63b6-4e53-a5c1-98da8b7b2e9b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005092133s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-353302
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (55.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-353302 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-353302 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [24994a31-5219-47de-bc92-cb9b147a3350] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [24994a31-5219-47de-bc92-cb9b147a3350] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [24994a31-5219-47de-bc92-cb9b147a3350] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00456123s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-353302 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 ssh "cat /opt/local-path-provisioner/pvc-59f8d0a8-be52-4426-9cd8-003f857fbb40_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-353302 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-353302 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.631653987s)
--- PASS: TestAddons/parallel/LocalPath (55.41s)
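Note: the storage-provisioner-rancher testdata is likewise not shown here. A minimal, assumed sketch of a claim on the addon's local-path StorageClass plus the writer pod (payload command hypothetical, chosen to match the file1 read-back above):

# Sketch only: PVC bound by the local-path provisioner and a pod that writes file1.
kubectl --context addons-353302 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path          # class installed by the addon
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: [sh, -c, "echo local-path > /data/file1"]   # hypothetical payload
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

The local-path class typically binds WaitForFirstConsumer, which would explain why the claim sits Pending through the repeated phase polls above until the pod is scheduled.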

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cqvk8" [1b48cd7f-dab4-4c09-ac78-8c9ed2c3e699] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.018092497s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-353302
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (12.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xmhfm" [817b82c5-8c62-4292-baea-ba9be5118311] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004727846s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-353302 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-353302 addons disable yakd --alsologtostderr -v=1: (6.19717251s)
--- PASS: TestAddons/parallel/Yakd (12.20s)

TestAddons/StoppedEnableDisable (6.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-353302
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-353302: (6.275691834s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-353302
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-353302
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-353302
--- PASS: TestAddons/StoppedEnableDisable (6.70s)

TestFunctional/serial/CopySyncFile (0.09s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/files/etc/test/nested/copy/7850/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.09s)

TestFunctional/serial/StartWithProxy (79.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-913422 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m19.231879943s)
--- PASS: TestFunctional/serial/StartWithProxy (79.34s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.64s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-913422 --alsologtostderr -v=8: (28.620890902s)
functional_test.go:663: soft start took 28.636515051s for "functional-913422" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.64s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-913422 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 cache add registry.k8s.io/pause:3.1: (1.010263366s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 cache add registry.k8s.io/pause:3.3: (1.118799716s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-913422 /tmp/TestFunctionalserialCacheCmdcacheadd_local2471385045/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache add minikube-local-cache-test:functional-913422
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache delete minikube-local-cache-test:functional-913422
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-913422
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.15s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (427.798958ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
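Note: the cache_reload sequence can be replayed by hand with the same commands the test runs:

# Remove a cached image inside the node, confirm it is gone (non-zero exit),
# then let `cache reload` push the host-side cache back into the node.
out/minikube-linux-amd64 -p functional-913422 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image gone, as expected"
out/minikube-linux-amd64 -p functional-913422 cache reload
out/minikube-linux-amd64 -p functional-913422 ssh sudo crictl inspecti registry.k8s.io/pause:latest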

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.38s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 kubectl -- --context functional-913422 get pods
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 kubectl -- --context functional-913422 get pods: (1.383418668s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.38s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-913422 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

TestFunctional/serial/ExtraConfig (49.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-913422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.081051717s)
functional_test.go:761: restart took 49.081206903s for "functional-913422" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.08s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-913422 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 logs: (1.678144983s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 logs --file /tmp/TestFunctionalserialLogsFileCmd2378041227/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 logs --file /tmp/TestFunctionalserialLogsFileCmd2378041227/001/logs.txt: (1.629623897s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.63s)

TestFunctional/serial/InvalidService (5.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-913422 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-913422
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-913422: exit status 115 (635.454162ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30563 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_5b55102efd84289233ffc613c137836b410b4e4d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-913422 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-913422 delete -f testdata/invalidsvc.yaml: (1.174023029s)
--- PASS: TestFunctional/serial/InvalidService (5.13s)
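Note: testdata/invalidsvc.yaml is not reproduced in the log; the SVC_UNREACHABLE message ("no running pod for service invalid-svc found") implies a NodePort service whose selector matches no pod. A hypothetical stand-in that should produce the same exit status 115:

# Sketch only: a NodePort service with no backing pods.
kubectl --context functional-913422 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod        # hypothetical label that matches nothing
  ports:
  - port: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-913422   # exit status 115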

TestFunctional/parallel/ConfigCmd (0.98s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 config get cpus: exit status 14 (139.791793ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 config get cpus: exit status 14 (190.703113ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.98s)
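For reference, the two exit-14 results above are the assertion: `config get` on an unset key fails. The same round trip from a shell:

# unset key -> get exits 14; set key -> get prints the value; unset -> exits 14 again
out/minikube-linux-amd64 -p functional-913422 config unset cpus
out/minikube-linux-amd64 -p functional-913422 config get cpus; echo "exit: $?"   # 14
out/minikube-linux-amd64 -p functional-913422 config set cpus 2
out/minikube-linux-amd64 -p functional-913422 config get cpus                    # 2
out/minikube-linux-amd64 -p functional-913422 config unset cpus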

TestFunctional/parallel/DashboardCmd (17.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-913422 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-913422 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45692: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.48s)

TestFunctional/parallel/DryRun (0.8s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (299.82588ms)

-- stdout --
	* [functional-913422] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19644-430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0915 06:51:11.535784   45416 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:11.536072   45416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:11.536088   45416 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:11.536099   45416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:11.536518   45416 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
	I0915 06:51:11.537187   45416 out.go:352] Setting JSON to false
	I0915 06:51:11.538210   45416 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":3345,"bootTime":1726379726,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0915 06:51:11.538307   45416 start.go:139] virtualization:  guest
	I0915 06:51:11.542331   45416 out.go:177] * [functional-913422] minikube v1.34.0 on Ubuntu 22.04 (amd64)
	I0915 06:51:11.545701   45416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:11.545804   45416 notify.go:220] Checking for updates...
	I0915 06:51:11.553699   45416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:11.557135   45416 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	I0915 06:51:11.564387   45416 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19644-430/.minikube
	I0915 06:51:11.570264   45416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:11.574223   45416 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0915 06:51:11.579092   45416 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:51:11.580393   45416 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:11.622149   45416 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0915 06:51:11.622316   45416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:11.728750   45416 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:51:11.70615675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:51:11.728934   45416 docker.go:318] overlay module found
	I0915 06:51:11.733375   45416 out.go:177] * Using the docker driver based on existing profile
	I0915 06:51:11.737353   45416 start.go:297] selected driver: docker
	I0915 06:51:11.737386   45416 start.go:901] validating driver "docker" against &{Name:functional-913422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-913422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:11.737564   45416 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:11.741499   45416 out.go:201] 
	W0915 06:51:11.744736   45416 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:51:11.748737   45416 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.80s)
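Note: the dry run still validates flags, which is the point of the failure above: 250MB is below minikube's 1800MB usable minimum, so the command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. Checked directly:

# --dry-run fails fast on the undersized memory request; no cluster changes.
out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --driver=docker --container-runtime=docker
echo "exit: $?"   # 23 on this run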

TestFunctional/parallel/InternationalLanguage (0.35s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (349.21145ms)

-- stdout --
	* [functional-913422] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	  - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19644-430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_WANTUPDATENOTIFICATION=false
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0915 06:51:11.233109   45369 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:11.233397   45369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:11.233414   45369 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:11.233423   45369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:11.233939   45369 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
	I0915 06:51:11.234629   45369 out.go:352] Setting JSON to false
	I0915 06:51:11.236339   45369 start.go:129] hostinfo: {"hostname":"cs-905301410258-default","uptime":3345,"bootTime":1726379726,"procs":91,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.1.100+","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"88b15d6b-fddc-40bb-b1ad-a90cb2566e38"}
	I0915 06:51:11.236434   45369 start.go:139] virtualization:  guest
	I0915 06:51:11.243048   45369 out.go:177] * [functional-913422] minikube v1.34.0 sur Ubuntu 22.04 (amd64)
	I0915 06:51:11.246728   45369 notify.go:220] Checking for updates...
	I0915 06:51:11.247017   45369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:11.250701   45369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:11.255524   45369 out.go:177]   - KUBECONFIG=/home/g528047478195_compute/minikube-integration/19644-430/kubeconfig
	I0915 06:51:11.264978   45369 out.go:177]   - MINIKUBE_HOME=/home/g528047478195_compute/minikube-integration/19644-430/.minikube
	I0915 06:51:11.269442   45369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:11.274481   45369 out.go:177]   - MINIKUBE_WANTUPDATENOTIFICATION=false
	I0915 06:51:11.279180   45369 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:51:11.280641   45369 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:11.324763   45369 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0915 06:51:11.324913   45369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:11.417276   45369 info.go:266] docker info: {ID:efb27d19-1e2c-434b-867e-6d44bc4ed6a4 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:false NGoroutines:58 SystemTime:2024-09-15 06:51:11.400474328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.1.100+ OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:2 MemTotal:8337182720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-905301410258-default Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:51:11.417519   45369 docker.go:318] overlay module found
	I0915 06:51:11.426762   45369 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 06:51:11.431363   45369 start.go:297] selected driver: docker
	I0915 06:51:11.431406   45369 start.go:901] validating driver "docker" against &{Name:functional-913422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-913422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cgroups-per-qos Value:false} {Component:kubelet Key:enforce-node-allocatable Value:""}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/g528047478195_compute:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:11.431616   45369 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:11.437005   45369 out.go:201] 
	W0915 06:51:11.441204   45369 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:51:11.444645   45369 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.35s)
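Note: the French output is locale-driven. The log does not show how the test selects the locale; presumably it exports one before rerunning the same undersized dry run, along the lines of:

# Assumed mechanism: minikube localizes its messages from the process locale.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-913422 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker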

TestFunctional/parallel/StatusCmd (2.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.19s)

TestFunctional/parallel/ServiceCmdConnect (12.05s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-913422 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-913422 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-zqgts" [f9e6eed5-fca7-4d9a-bd07-c15acab575a9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-zqgts" [f9e6eed5-fca7-4d9a-bd07-c15acab575a9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005190322s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service hello-node-connect --url
E0915 06:50:13.544660    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32551
functional_test.go:1675: http://192.168.49.2:32551: success! body:

Hostname: hello-node-connect-67bdd5bbb4-zqgts

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32551
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.05s)
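For reference, the whole connect round trip condenses to the commands shown plus one request:

# Deploy echoserver, expose it on a NodePort, resolve the URL, and hit it.
kubectl --context functional-913422 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-913422 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-913422 service hello-node-connect --url)
curl -s "$URL"   # echoserver reflects the request, as in the body above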

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (29.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [520b210e-5e2d-4536-aa4b-21d3ed1ff9f2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005351979s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-913422 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-913422 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-913422 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-913422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0ee75319-1b00-4d20-be39-eee862f22adb] Pending
helpers_test.go:344: "sp-pod" [0ee75319-1b00-4d20-be39-eee862f22adb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0ee75319-1b00-4d20-be39-eee862f22adb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005472796s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-913422 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-913422 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-913422 delete -f testdata/storage-provisioner/pod.yaml: (1.366024772s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-913422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1b147aa6-a3b9-4983-8b4e-24d1ee40d17b] Pending
helpers_test.go:344: "sp-pod" [1b147aa6-a3b9-4983-8b4e-24d1ee40d17b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1b147aa6-a3b9-4983-8b4e-24d1ee40d17b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005569984s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-913422 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.69s)
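The second sp-pod exists to prove the data lives in the claim, not the pod; the persistence check reduces to:

# Write through the PVC, delete the pod, recreate it, and confirm the file
# survived: the volume outlives any single consumer.
kubectl --context functional-913422 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-913422 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-913422 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-913422 exec sp-pod -- ls /tmp/mount   # foo persists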

TestFunctional/parallel/SSHCmd (1.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.34s)

TestFunctional/parallel/CpCmd (4.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh -n functional-913422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cp functional-913422:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd100382789/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh -n functional-913422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh -n functional-913422 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (4.32s)

TestFunctional/parallel/MySQL (35.22s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-913422 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hgrlx" [f3d11606-4f4e-439d-894d-9ef6b4cf3129] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hgrlx" [f3d11606-4f4e-439d-894d-9ef6b4cf3129] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.006587567s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;": exit status 1 (326.651456ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;": exit status 1 (367.187945ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;": exit status 1 (238.470289ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-913422 exec mysql-6cdb49bbb-hgrlx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.22s)
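
Note: the ERROR 1045 and ERROR 2002 attempts above are expected warm-up noise: mysqld in the pod accepts exec sessions before initialization completes, so the test simply retries the query until it succeeds. A minimal retry sketch of that pattern (illustrative, not the actual test code; pod name and context taken from this run):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			// Re-issue the probe query; transient 1045/2002 errors are
			// expected while mysqld is still bootstrapping.
			cmd := exec.Command("kubectl", "--context", "functional-913422",
				"exec", "mysql-6cdb49bbb-hgrlx", "--",
				"mysql", "-ppassword", "-e", "show databases;")
			out, err := cmd.CombinedOutput()
			if err == nil {
				log.Printf("query succeeded:\n%s", out)
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("mysql never became ready: %v\n%s", err, out)
			}
			time.Sleep(2 * time.Second)
		}
	}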

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7850/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /etc/test/nested/copy/7850/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7850.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /etc/ssl/certs/7850.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7850.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /usr/share/ca-certificates/7850.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/78502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /etc/ssl/certs/78502.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/78502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /usr/share/ca-certificates/78502.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.70s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-913422 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh "sudo systemctl is-active crio": exit status 1 (426.597713ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
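
Note: the non-zero exit above is the desired outcome. `systemctl is-active` exits 0 only when the unit is active (an inactive unit yields exit status 3, which ssh propagates), so with the docker runtime selected, the test passes precisely because crio reports "inactive". A minimal sketch of that exit-code check (illustrative, not the test's own helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `systemctl is-active` prints the unit state and exits non-zero
		// for anything other than "active".
		out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			fmt.Println("crio is inactive - expected for the docker runtime")
			return
		}
		fmt.Printf("unexpected crio state: %q (err: %v)\n", state, err)
	}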

TestFunctional/parallel/License (1.81s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.777664701s)
--- PASS: TestFunctional/parallel/License (1.81s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 40990: os: process already finished
helpers_test.go:508: unable to kill pid 40820: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.84s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-913422 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d50a30ae-2c0f-4065-b053-fb1c310f69a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d50a30ae-2c0f-4065-b053-fb1c310f69a2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005862287s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.84s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-913422 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-913422 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-w42n6" [a86127dc-b033-4bb8-9306-ef52c9352213] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0915 06:50:14.826983    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-6b9f76b5c7-w42n6" [a86127dc-b033-4bb8-9306-ef52c9352213] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004809171s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service list -o json
functional_test.go:1494: Took "670.402232ms" to run "out/minikube-linux-amd64 -p functional-913422 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service --namespace=default --https --url hello-node
E0915 06:50:22.510601    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1522: found endpoint: https://192.168.49.2:31953
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31953
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "526.37559ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "91.847305ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.62s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "479.225103ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "99.41351ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 version -o=json --components: (1.646535114s)
--- PASS: TestFunctional/parallel/Version/components (1.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-913422 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-913422
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-913422
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-913422 image ls --format short --alsologtostderr:
I0915 06:52:25.202935   48282 out.go:345] Setting OutFile to fd 1 ...
I0915 06:52:25.203093   48282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.203111   48282 out.go:358] Setting ErrFile to fd 2...
I0915 06:52:25.203122   48282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.203479   48282 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:52:25.204368   48282 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.204571   48282 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.205208   48282 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:52:25.234993   48282 ssh_runner.go:195] Run: systemctl --version
I0915 06:52:25.235474   48282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913422
I0915 06:52:25.264242   48282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/functional-913422/id_rsa Username:docker}
I0915 06:52:25.368723   48282 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-913422 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/minikube-local-cache-test | functional-913422 | 32a626bd37785 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-913422 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-913422 image ls --format table --alsologtostderr:
I0915 06:52:25.816008   48345 out.go:345] Setting OutFile to fd 1 ...
I0915 06:52:25.816319   48345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.816375   48345 out.go:358] Setting ErrFile to fd 2...
I0915 06:52:25.816403   48345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.816896   48345 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:52:25.819210   48345 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.819500   48345 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.820402   48345 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:52:25.848874   48345 ssh_runner.go:195] Run: systemctl --version
I0915 06:52:25.848971   48345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913422
I0915 06:52:25.888786   48345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/functional-913422/id_rsa Username:docker}
I0915 06:52:25.990263   48345 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-913422 image ls --format json --alsologtostderr:
[{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"32a626bd37785d8a0723afdd79943d494ab0421286235462682ed6622fe8221d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-913422"],"size":"30"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-913422"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-913422 image ls --format json --alsologtostderr:
I0915 06:52:25.496325   48314 out.go:345] Setting OutFile to fd 1 ...
I0915 06:52:25.496546   48314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.496565   48314 out.go:358] Setting ErrFile to fd 2...
I0915 06:52:25.496576   48314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:25.496887   48314 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:52:25.497786   48314 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.497981   48314 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:25.498691   48314 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:52:25.527541   48314 ssh_runner.go:195] Run: systemctl --version
I0915 06:52:25.527653   48314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913422
I0915 06:52:25.553924   48314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/functional-913422/id_rsa Username:docker}
I0915 06:52:25.672084   48314 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-913422 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 32a626bd37785d8a0723afdd79943d494ab0421286235462682ed6622fe8221d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-913422
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-913422
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-913422 image ls --format yaml --alsologtostderr:
I0915 06:52:24.873016   48249 out.go:345] Setting OutFile to fd 1 ...
I0915 06:52:24.873399   48249 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:24.873420   48249 out.go:358] Setting ErrFile to fd 2...
I0915 06:52:24.873431   48249 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:24.873742   48249 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:52:24.874640   48249 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:24.874876   48249 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:24.875523   48249 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:52:24.907561   48249 ssh_runner.go:195] Run: systemctl --version
I0915 06:52:24.907665   48249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913422
I0915 06:52:24.936053   48249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/functional-913422/id_rsa Username:docker}
I0915 06:52:25.042126   48249 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-913422 ssh pgrep buildkitd: exit status 1 (422.023231ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image build -t localhost/my-image:functional-913422 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 image build -t localhost/my-image:functional-913422 testdata/build --alsologtostderr: (2.660491855s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-913422 image build -t localhost/my-image:functional-913422 testdata/build --alsologtostderr:
I0915 06:52:26.547407   48439 out.go:345] Setting OutFile to fd 1 ...
I0915 06:52:26.547717   48439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:26.547769   48439 out.go:358] Setting ErrFile to fd 2...
I0915 06:52:26.547798   48439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:52:26.548087   48439 root.go:338] Updating PATH: /home/g528047478195_compute/minikube-integration/19644-430/.minikube/bin
I0915 06:52:26.549031   48439 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:26.605406   48439 config.go:182] Loaded profile config "functional-913422": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:52:26.606521   48439 cli_runner.go:164] Run: docker container inspect functional-913422 --format={{.State.Status}}
I0915 06:52:26.636106   48439 ssh_runner.go:195] Run: systemctl --version
I0915 06:52:26.636268   48439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-913422
I0915 06:52:26.671059   48439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/g528047478195_compute/minikube-integration/19644-430/.minikube/machines/functional-913422/id_rsa Username:docker}
I0915 06:52:26.778166   48439 build_images.go:161] Building image from path: /tmp/build.2451489756.tar
I0915 06:52:26.778361   48439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:52:26.794795   48439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2451489756.tar
I0915 06:52:26.800709   48439 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2451489756.tar: stat -c "%s %y" /var/lib/minikube/build/build.2451489756.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2451489756.tar': No such file or directory
I0915 06:52:26.800748   48439 ssh_runner.go:362] scp /tmp/build.2451489756.tar --> /var/lib/minikube/build/build.2451489756.tar (3072 bytes)
I0915 06:52:26.852492   48439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2451489756
I0915 06:52:26.875740   48439 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2451489756 -xf /var/lib/minikube/build/build.2451489756.tar
I0915 06:52:26.896843   48439 docker.go:360] Building image: /var/lib/minikube/build/build.2451489756
I0915 06:52:26.896968   48439 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-913422 /var/lib/minikube/build/build.2451489756
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3d4514174a37cadb0575210635c939d22bcec647fe99ded38ae9e1331c4454a8 done
#8 naming to localhost/my-image:functional-913422 done
#8 DONE 0.1s
I0915 06:52:29.080053   48439 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-913422 /var/lib/minikube/build/build.2451489756: (2.183052519s)
I0915 06:52:29.080235   48439 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2451489756
I0915 06:52:29.096650   48439 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2451489756.tar
I0915 06:52:29.112084   48439 build_images.go:217] Built localhost/my-image:functional-913422 from /tmp/build.2451489756.tar
I0915 06:52:29.112130   48439 build_images.go:133] succeeded building to: functional-913422
I0915 06:52:29.112140   48439 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
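
Note: the stderr trace above shows the build flow: the harness packs the local build context into a tar (/tmp/build.2451489756.tar), copies it into the node, untars it under /var/lib/minikube/build, and runs docker build there. A minimal sketch of the packing step under those assumptions (illustrative paths, not minikube's actual implementation):

	package main

	import (
		"archive/tar"
		"io"
		"log"
		"os"
		"path/filepath"
	)

	// tarDir packs every regular file under src into a tar at dst,
	// storing paths relative to src - the shape a docker build context
	// needs before it is shipped to the remote daemon.
	func tarDir(src, dst string) error {
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		defer tw.Close()
		return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
			if err != nil || info.IsDir() {
				return err
			}
			rel, err := filepath.Rel(src, path)
			if err != nil {
				return err
			}
			hdr, err := tar.FileInfoHeader(info, "")
			if err != nil {
				return err
			}
			hdr.Name = rel
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			in, err := os.Open(path)
			if err != nil {
				return err
			}
			defer in.Close()
			_, err = io.Copy(tw, in)
			return err
		})
	}

	func main() {
		if err := tarDir("testdata/build", "/tmp/build.ctx.tar"); err != nil {
			log.Fatal(err)
		}
	}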

TestFunctional/parallel/ImageCommands/Setup (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
E0915 06:51:34.195428    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.931871602s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-913422
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image load --daemon kicbase/echo-server:functional-913422 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-913422 image load --daemon kicbase/echo-server:functional-913422 --alsologtostderr: (1.22398107s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image load --daemon kicbase/echo-server:functional-913422 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.348185559s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-913422
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image load --daemon kicbase/echo-server:functional-913422 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image save kicbase/echo-server:functional-913422 /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image rm kicbase/echo-server:functional-913422 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image load /home/g528047478195_compute/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-913422
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 image save --daemon kicbase/echo-server:functional-913422 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-913422
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/DockerEnv/bash (1.53s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-913422 docker-env) && out/minikube-linux-amd64 status -p functional-913422"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-913422 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-913422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)
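Note: all three subtests above drive the same command. update-context rewrites the profile's kubeconfig entry so the server address matches the cluster's current IP and port, e.g.:

	$ minikube -p functional-913422 update-context
	$ kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-913422")].cluster.server}'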

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-913422 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-913422
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-913422
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-913422
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/FirstStart (77.64s)

=== RUN   TestStartStop/group/cloud-shell/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-655089 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 06:54:30.991614    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:30.998077    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.009599    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.031784    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.073368    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.155177    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.316770    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:31.639066    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:32.281182    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:33.562578    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:36.124467    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:41.247378    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:51.489384    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:11.971485    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:12.244255    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:39.987026    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-655089 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m17.607531705s)
--- PASS: TestStartStop/group/cloud-shell/serial/FirstStart (77.64s)
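Note: the E0915 cert_rotation lines are noise from certificate watchers still pointing at the functional-913422 and addons-353302 profiles, whose client.crt files were removed earlier; the start itself succeeds. The invocation, annotated (flags verbatim from the run):

	# --memory is in megabytes; --wait=true blocks until core components report healthy
	$ minikube start -p cloud-shell-655089 --memory=2200 --wait=true \
	    --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1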

                                                
                                    
TestStartStop/group/cloud-shell/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/cloud-shell/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-655089 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [187c133a-21b5-434a-9d1e-429887f8afbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [187c133a-21b5-434a-9d1e-429887f8afbd] Running
E0915 06:55:52.933515    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/cloud-shell/serial/DeployApp: integration-test=busybox healthy within 9.004750594s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context cloud-shell-655089 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/cloud-shell/serial/DeployApp (9.57s)
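Note: the harness polls for pods carrying the integration-test=busybox label; outside the harness the same readiness gate can be approximated with kubectl wait (a stand-in, not the harness's own mechanism):

	$ kubectl --context cloud-shell-655089 create -f testdata/busybox.yaml
	$ kubectl --context cloud-shell-655089 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context cloud-shell-655089 exec busybox -- /bin/sh -c "ulimit -n"   # open-file limit inside the pod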

                                                
                                    
TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-655089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p cloud-shell-655089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.176067862s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context cloud-shell-655089 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonWhileActive (1.34s)
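Note: --images and --registries redirect where an addon's components are pulled from; fake.domain is deliberate here, so the deployment spec can be inspected without the image ever being pullable. To confirm the override landed (the grep is illustrative):

	$ kubectl --context cloud-shell-655089 -n kube-system describe deploy/metrics-server | grep -i image
	# the Image: field should reference fake.domain rather than a real registry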

                                                
                                    
TestStartStop/group/cloud-shell/serial/Stop (11.31s)

=== RUN   TestStartStop/group/cloud-shell/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p cloud-shell-655089 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p cloud-shell-655089 --alsologtostderr -v=3: (11.314596595s)
--- PASS: TestStartStop/group/cloud-shell/serial/Stop (11.31s)
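Note: stop shuts the node down while keeping its state on disk, which is what lets SecondStart below resume the same cluster. With the docker driver the node is a container named after the profile, so its state is visible from the host (the docker ps line is illustrative):

	$ minikube stop -p cloud-shell-655089
	$ docker ps -a --filter name=cloud-shell-655089 --format '{{.Names}}: {{.Status}}'   # Exited, not removed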

                                                
                                    
TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-655089 -n cloud-shell-655089
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-655089 -n cloud-shell-655089: exit status 7 (139.106133ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p cloud-shell-655089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/cloud-shell/serial/EnableAddonAfterStop (0.30s)
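Note: exit status 7 from minikube status indicates a stopped host rather than a command failure, hence the harness's "(may be ok)". The same check by hand:

	$ minikube status --format='{{.Host}}' -p cloud-shell-655089; echo "exit=$?"
	Stopped
	exit=7
	# addons enabled while stopped are recorded in the profile and take effect on the next start
	$ minikube addons enable dashboard -p cloud-shell-655089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4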

                                                
                                    
TestStartStop/group/cloud-shell/serial/SecondStart (272.92s)

=== RUN   TestStartStop/group/cloud-shell/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p cloud-shell-655089 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 06:57:14.855533    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:59:30.991859    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:59:58.697327    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/functional-913422/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:00:12.244005    7850 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/g528047478195_compute/minikube-integration/19644-430/.minikube/profiles/addons-353302/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p cloud-shell-655089 --memory=2200 --alsologtostderr --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m32.365958502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cloud-shell-655089 -n cloud-shell-655089
--- PASS: TestStartStop/group/cloud-shell/serial/SecondStart (272.92s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fw8bl" [6222264e-28cb-4b4f-ac85-f17993ed774a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005406842s
--- PASS: TestStartStop/group/cloud-shell/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fw8bl" [6222264e-28cb-4b4f-ac85-f17993ed774a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005043033s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context cloud-shell-655089 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/cloud-shell/serial/AddonExistsAfterStop (5.21s)
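Note: both dashboard checks above poll the same pod by its k8s-app label. An equivalent one-liner (assuming the addon's deployment is named kubernetes-dashboard, as the pod name suggests):

	$ kubectl --context cloud-shell-655089 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard --timeout=9m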

                                                
                                    
TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p cloud-shell-655089 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/cloud-shell/serial/VerifyKubernetesImages (0.33s)
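Note: to see the full list the test parsed, pipe the JSON through jq; the repoTags field name is assumed from minikube's image list schema, so check the raw output if it differs:

	$ minikube -p cloud-shell-655089 image list --format=json | jq -r '.[].repoTags[]'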

                                                
                                    
TestStartStop/group/cloud-shell/serial/Pause (4.34s)

=== RUN   TestStartStop/group/cloud-shell/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p cloud-shell-655089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-655089 -n cloud-shell-655089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-655089 -n cloud-shell-655089: exit status 2 (460.357336ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-655089 -n cloud-shell-655089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-655089 -n cloud-shell-655089: exit status 2 (470.469686ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p cloud-shell-655089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cloud-shell-655089 -n cloud-shell-655089
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p cloud-shell-655089 -n cloud-shell-655089
--- PASS: TestStartStop/group/cloud-shell/serial/Pause (4.34s)
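Note: pause freezes the control plane without stopping the node, so status reports the API server as Paused and the kubelet as Stopped, each via exit status 2. The round trip:

	$ minikube pause -p cloud-shell-655089
	$ minikube status --format='{{.APIServer}}' -p cloud-shell-655089    # Paused (exit status 2)
	$ minikube unpause -p cloud-shell-655089
	$ minikube status --format='{{.APIServer}}' -p cloud-shell-655089    # back to Running (exit status 0)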

                                                
                                    

Test skip (5/108)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    