Test Report: Docker_Linux 20889

64b89cb3fe94067f1fb4652bef98266b0ead990d:2025-06-05:39905

Failed tests (1/347)

| Order | Failed Test               | Duration |
|-------|---------------------------|----------|
| 29    | TestAddons/serial/Volcano | 197.81s  |
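To re-run just this subtest locally, go's -run flag takes slash-separated subtest patterns. A hedged sketch follows; the --minikube-start-args flag is the one minikube's integration suite documents for choosing the driver (an assumption here; verify it against test/integration/main_test.go in your checkout):

    # Sketch: select only TestAddons -> serial -> Volcano.
    go test -v -timeout 30m ./test/integration \
      -run 'TestAddons/serial/Volcano' \
      --minikube-start-args='--driver=docker --container-runtime=docker'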
TestAddons/serial/Volcano (197.81s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 7.340812ms
addons_test.go:868: volcano-scheduler stabilized in 7.379614ms
addons_test.go:884: volcano-controller stabilized in 7.518294ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-854568c9bb-vpbr4" [3a854d0f-4b22-4af2-b13b-d131108020d6] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003005448s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-55859c8887-rd6kw" [00b24296-3c74-46a8-aa15-82029ef08a08] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003014802s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-7b774bbd55-9ssw4" [a8b755c7-b700-40b1-8b38-cd9ef92033ca] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002725517s
addons_test.go:903: (dbg) Run:  kubectl --context addons-191833 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-191833 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-191833 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:935: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191833 -n addons-191833
addons_test.go:935: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-06-05 18:37:11.009973652 +0000 UTC m=+371.615770397
addons_test.go:936: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
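The post-mortem below is what helpers_test dumps automatically. For manual triage, the names in the log above (profile addons-191833, namespace my-volcano, and the label selector the test waits on) are enough to ask the cluster why the pod never started; a sketch:

    # Sketch: inspect the Volcano job and its pods with the same kube context.
    kubectl --context addons-191833 get vcjob -n my-volcano
    kubectl --context addons-191833 get pods -n my-volcano \
        -l volcano.sh/job-name=test-job -o wide
    # Pod events usually name the blocker (unschedulable, image pull, webhook).
    kubectl --context addons-191833 describe pods -n my-volcano \
        -l volcano.sh/job-name=test-job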
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191833
helpers_test.go:235: (dbg) docker inspect addons-191833:

-- stdout --
	[
	    {
	        "Id": "0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da",
	        "Created": "2025-06-05T18:31:35.737847082Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15167,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-06-05T18:31:35.770298821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:795ea6a69ce682944ae3f1bc8b732217eb065d3b981db69c80fd26ffbf05eda9",
	        "ResolvConfPath": "/var/lib/docker/containers/0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da/hosts",
	        "LogPath": "/var/lib/docker/containers/0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da/0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da-json.log",
	        "Name": "/addons-191833",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191833:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191833",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b19e3190193b5d214796b76f1cf3fc249648a6ed6b98c14d6c76fb536c051da",
	                "LowerDir": "/var/lib/docker/overlay2/248f7fe439fd9da81ea5e628247cbf44690b10b186006b97a5a392af46d8d081-init/diff:/var/lib/docker/overlay2/fc55c275e3d1a836ceba89fa93ef66fb4dffe8802095de12294feb4e936450d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/248f7fe439fd9da81ea5e628247cbf44690b10b186006b97a5a392af46d8d081/merged",
	                "UpperDir": "/var/lib/docker/overlay2/248f7fe439fd9da81ea5e628247cbf44690b10b186006b97a5a392af46d8d081/diff",
	                "WorkDir": "/var/lib/docker/overlay2/248f7fe439fd9da81ea5e628247cbf44690b10b186006b97a5a392af46d8d081/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191833",
	                "Source": "/var/lib/docker/volumes/addons-191833/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191833",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191833",
	                "name.minikube.sigs.k8s.io": "addons-191833",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4cd532f9facde64d748b9d03ed73ffcfa9e9810b31d07f71ef368066e9b4fc32",
	            "SandboxKey": "/var/run/docker/netns/4cd532f9facd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191833": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:f0:07:a6:91:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4d985472f556ceb79fee8b6f5366a9c11ed5a0b0e951dd70cd85b2c239157c77",
	                    "EndpointID": "2c19ccc016a5569b69a345358a5215d7900af74f663c24e58e859e79563faecf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191833",
	                        "0b19e3190193"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
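The inspect dump above is the full document; for quick checks the same data can be narrowed with --format, the same Go-template trick minikube itself uses later in this log to read the SSH port. A sketch against the same container:

    # Container state of the minikube node.
    docker container inspect addons-191833 --format '{{.State.Status}}'
    # Host port published for the API server (8443/tcp in the container).
    docker container inspect addons-191833 \
        --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # Static IP on the cluster network.
    docker container inspect addons-191833 \
        --format '{{(index .NetworkSettings.Networks "addons-191833").IPAddress}}'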
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191833 -n addons-191833
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 logs -n 25: (1.064919283s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-810688   | jenkins | v1.36.0 | 05 Jun 25 18:30 UTC |                     |
	|         | -p download-only-810688              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| delete  | -p download-only-810688              | download-only-810688   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| start   | -o=json --download-only              | download-only-887660   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | -p download-only-887660              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| delete  | -p download-only-887660              | download-only-887660   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| delete  | -p download-only-810688              | download-only-810688   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| delete  | -p download-only-887660              | download-only-887660   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| start   | --download-only -p                   | download-docker-744836 | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | download-docker-744836               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-744836            | download-docker-744836 | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| start   | --download-only -p                   | binary-mirror-722276   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | binary-mirror-722276                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39967               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-722276              | binary-mirror-722276   | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| addons  | enable dashboard -p                  | addons-191833          | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | addons-191833                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191833          | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | addons-191833                        |                        |         |         |                     |                     |
	| start   | -p addons-191833 --wait=true         | addons-191833          | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:33 UTC |
	|         | --memory=4096 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=registry-creds              |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/05 18:31:11
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 18:31:11.578930   14557 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:31:11.579194   14557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:31:11.579206   14557 out.go:358] Setting ErrFile to fd 2...
	I0605 18:31:11.579211   14557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:31:11.579413   14557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:31:11.579991   14557 out.go:352] Setting JSON to false
	I0605 18:31:11.580837   14557 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":819,"bootTime":1749147453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0605 18:31:11.580932   14557 start.go:140] virtualization: kvm guest
	I0605 18:31:11.582962   14557 out.go:177] * [addons-191833] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0605 18:31:11.584441   14557 notify.go:220] Checking for updates...
	I0605 18:31:11.584447   14557 out.go:177]   - MINIKUBE_LOCATION=20889
	I0605 18:31:11.585763   14557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:31:11.587242   14557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:31:11.588653   14557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	I0605 18:31:11.590087   14557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0605 18:31:11.591266   14557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:31:11.592451   14557 driver.go:404] Setting default libvirt URI to qemu:///system
	I0605 18:31:11.613490   14557 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0605 18:31:11.613573   14557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:31:11.660307   14557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-06-05 18:31:11.652062814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:31:11.660412   14557 docker.go:318] overlay module found
	I0605 18:31:11.661991   14557 out.go:177] * Using the docker driver based on user configuration
	I0605 18:31:11.663114   14557 start.go:304] selected driver: docker
	I0605 18:31:11.663127   14557 start.go:908] validating driver "docker" against <nil>
	I0605 18:31:11.663138   14557 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:31:11.663964   14557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:31:11.710409   14557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-06-05 18:31:11.702560185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:31:11.710542   14557 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0605 18:31:11.710751   14557 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 18:31:11.712349   14557 out.go:177] * Using Docker driver with root privileges
	I0605 18:31:11.713471   14557 cni.go:84] Creating CNI manager for ""
	I0605 18:31:11.713538   14557 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0605 18:31:11.713552   14557 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0605 18:31:11.713633   14557 start.go:347] cluster config:
	{Name:addons-191833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:addons-191833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0605 18:31:11.715120   14557 out.go:177] * Starting "addons-191833" primary control-plane node in "addons-191833" cluster
	I0605 18:31:11.716277   14557 cache.go:121] Beginning downloading kic base image for docker with docker
	I0605 18:31:11.717471   14557 out.go:177] * Pulling base image v0.0.47 ...
	I0605 18:31:11.718530   14557 preload.go:131] Checking if preload exists for k8s version v1.33.1 and runtime docker
	I0605 18:31:11.718576   14557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.1-docker-overlay2-amd64.tar.lz4
	I0605 18:31:11.718584   14557 cache.go:56] Caching tarball of preloaded images
	I0605 18:31:11.718646   14557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b in local docker daemon
	I0605 18:31:11.718671   14557 preload.go:172] Found /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0605 18:31:11.718683   14557 cache.go:59] Finished verifying existence of preloaded tar for v1.33.1 on docker
	I0605 18:31:11.719012   14557 profile.go:143] Saving config to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/config.json ...
	I0605 18:31:11.719036   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/config.json: {Name:mk5b34a87b3fb8ee2a7f627d5a1c0d1466e0ae7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:11.733657   14557 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b to local cache
	I0605 18:31:11.733763   14557 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b in local cache directory
	I0605 18:31:11.733778   14557 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b in local cache directory, skipping pull
	I0605 18:31:11.733782   14557 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b exists in cache, skipping pull
	I0605 18:31:11.733791   14557 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b as a tarball
	I0605 18:31:11.733797   14557 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b from local cache
	I0605 18:31:23.826101   14557 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b from cached tarball
	I0605 18:31:23.826144   14557 cache.go:230] Successfully downloaded all kic artifacts
	I0605 18:31:23.826190   14557 start.go:360] acquireMachinesLock for addons-191833: {Name:mk467530bfa9f926ed35c22d539367fd22b53798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:31:23.826314   14557 start.go:364] duration metric: took 103.165µs to acquireMachinesLock for "addons-191833"
	I0605 18:31:23.826343   14557 start.go:93] Provisioning new machine with config: &{Name:addons-191833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:addons-191833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0605 18:31:23.826440   14557 start.go:125] createHost starting for "" (driver="docker")
	I0605 18:31:23.828028   14557 out.go:235] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0605 18:31:23.828274   14557 start.go:159] libmachine.API.Create for "addons-191833" (driver="docker")
	I0605 18:31:23.828303   14557 client.go:168] LocalClient.Create starting
	I0605 18:31:23.828391   14557 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem
	I0605 18:31:23.924560   14557 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/cert.pem
	I0605 18:31:24.385611   14557 cli_runner.go:164] Run: docker network inspect addons-191833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0605 18:31:24.400505   14557 cli_runner.go:211] docker network inspect addons-191833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0605 18:31:24.400568   14557 network_create.go:284] running [docker network inspect addons-191833] to gather additional debugging logs...
	I0605 18:31:24.400585   14557 cli_runner.go:164] Run: docker network inspect addons-191833
	W0605 18:31:24.415695   14557 cli_runner.go:211] docker network inspect addons-191833 returned with exit code 1
	I0605 18:31:24.415757   14557 network_create.go:287] error running [docker network inspect addons-191833]: docker network inspect addons-191833: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191833 not found
	I0605 18:31:24.415785   14557 network_create.go:289] output of [docker network inspect addons-191833]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191833 not found
	
	** /stderr **
	I0605 18:31:24.415942   14557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 18:31:24.431907   14557 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2e80}
	I0605 18:31:24.431949   14557 network_create.go:124] attempt to create docker network addons-191833 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0605 18:31:24.431992   14557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191833 addons-191833
	I0605 18:31:24.479282   14557 network_create.go:108] docker network addons-191833 192.168.49.0/24 created
	I0605 18:31:24.479311   14557 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191833" container
	I0605 18:31:24.479361   14557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 18:31:24.495602   14557 cli_runner.go:164] Run: docker volume create addons-191833 --label name.minikube.sigs.k8s.io=addons-191833 --label created_by.minikube.sigs.k8s.io=true
	I0605 18:31:24.512094   14557 oci.go:103] Successfully created a docker volume addons-191833
	I0605 18:31:24.512155   14557 cli_runner.go:164] Run: docker run --rm --name addons-191833-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191833 --entrypoint /usr/bin/test -v addons-191833:/var gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b -d /var/lib
	I0605 18:31:31.639116   14557 cli_runner.go:217] Completed: docker run --rm --name addons-191833-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191833 --entrypoint /usr/bin/test -v addons-191833:/var gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b -d /var/lib: (7.126912971s)
	I0605 18:31:31.639144   14557 oci.go:107] Successfully prepared a docker volume addons-191833
	I0605 18:31:31.639187   14557 preload.go:131] Checking if preload exists for k8s version v1.33.1 and runtime docker
	I0605 18:31:31.639210   14557 kic.go:194] Starting extracting preloaded images to volume ...
	I0605 18:31:31.639285   14557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191833:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 18:31:35.675184   14557 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191833:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b -I lz4 -xf /preloaded.tar -C /extractDir: (4.03581421s)
	I0605 18:31:35.675219   14557 kic.go:203] duration metric: took 4.036005523s to extract preloaded images to volume ...
	W0605 18:31:35.675379   14557 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 18:31:35.675591   14557 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 18:31:35.722753   14557 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191833 --name addons-191833 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191833 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191833 --network addons-191833 --ip 192.168.49.2 --volume addons-191833:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b
	I0605 18:31:36.001120   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Running}}
	I0605 18:31:36.019280   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:36.038112   14557 cli_runner.go:164] Run: docker exec addons-191833 stat /var/lib/dpkg/alternatives/iptables
	I0605 18:31:36.076367   14557 oci.go:144] the created container "addons-191833" has a running status.
	I0605 18:31:36.076398   14557 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa...
	I0605 18:31:36.270914   14557 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 18:31:36.295006   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:36.312730   14557 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 18:31:36.312751   14557 kic_runner.go:114] Args: [docker exec --privileged addons-191833 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0605 18:31:36.430168   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:36.450165   14557 machine.go:93] provisionDockerMachine start ...
	I0605 18:31:36.450266   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:36.476812   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:36.477185   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:36.477199   14557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0605 18:31:36.670127   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191833
	
	I0605 18:31:36.670197   14557 ubuntu.go:169] provisioning hostname "addons-191833"
	I0605 18:31:36.670261   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:36.688220   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:36.688441   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:36.688457   14557 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191833 && echo "addons-191833" | sudo tee /etc/hostname
	I0605 18:31:36.821184   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191833
	
	I0605 18:31:36.821257   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:36.838433   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:36.838651   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:36.838669   14557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191833/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:31:36.958900   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 18:31:36.958930   14557 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20889-6302/.minikube CaCertPath:/home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20889-6302/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20889-6302/.minikube}
	I0605 18:31:36.958963   14557 ubuntu.go:177] setting up certificates
	I0605 18:31:36.958977   14557 provision.go:84] configureAuth start
	I0605 18:31:36.959041   14557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191833
	I0605 18:31:36.974697   14557 provision.go:143] copyHostCerts
	I0605 18:31:36.974757   14557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20889-6302/.minikube/ca.pem (1078 bytes)
	I0605 18:31:36.974855   14557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20889-6302/.minikube/cert.pem (1123 bytes)
	I0605 18:31:36.974908   14557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20889-6302/.minikube/key.pem (1675 bytes)
	I0605 18:31:36.974953   14557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20889-6302/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca-key.pem org=jenkins.addons-191833 san=[127.0.0.1 192.168.49.2 addons-191833 localhost minikube]
	I0605 18:31:37.167086   14557 provision.go:177] copyRemoteCerts
	I0605 18:31:37.167137   14557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:31:37.167191   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:37.183076   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:37.275322   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0605 18:31:37.295815   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0605 18:31:37.316353   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 18:31:37.336469   14557 provision.go:87] duration metric: took 377.473021ms to configureAuth
	I0605 18:31:37.336498   14557 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:31:37.336654   14557 config.go:182] Loaded profile config "addons-191833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:31:37.336698   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:37.352654   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:37.352918   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:37.352940   14557 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0605 18:31:37.475422   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0605 18:31:37.475449   14557 ubuntu.go:71] root file system type: overlay
	I0605 18:31:37.475576   14557 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0605 18:31:37.475637   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:37.491929   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:37.492152   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:37.492248   14557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0605 18:31:37.625200   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0605 18:31:37.625262   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:37.643501   14557 main.go:141] libmachine: Using SSH client type: native
	I0605 18:31:37.643750   14557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0605 18:31:37.643770   14557 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0605 18:31:38.334663   14557 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-04-18 09:50:48.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-06-05 18:31:37.620563128 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0605 18:31:38.334691   14557 machine.go:96] duration metric: took 1.884503919s to provisionDockerMachine
	I0605 18:31:38.334701   14557 client.go:171] duration metric: took 14.506393332s to LocalClient.Create
	I0605 18:31:38.334719   14557 start.go:167] duration metric: took 14.506444561s to libmachine.API.Create "addons-191833"
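The unit-file update a few lines above uses a compare-then-swap idiom: diff -u exits non-zero only when the rendered docker.service.new differs from the installed unit, so the mv / daemon-reload / restart branch after || runs only when there is a real change (here there was, hence the diff output). A minimal standalone sketch of the same pattern, with the paths from this run:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }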
	I0605 18:31:38.334728   14557 start.go:293] postStartSetup for "addons-191833" (driver="docker")
	I0605 18:31:38.334743   14557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:31:38.334806   14557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:31:38.334856   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:38.350663   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:38.439309   14557 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:31:38.442217   14557 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:31:38.442246   14557 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:31:38.442253   14557 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:31:38.442260   14557 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0605 18:31:38.442269   14557 filesync.go:126] Scanning /home/jenkins/minikube-integration/20889-6302/.minikube/addons for local assets ...
	I0605 18:31:38.442334   14557 filesync.go:126] Scanning /home/jenkins/minikube-integration/20889-6302/.minikube/files for local assets ...
	I0605 18:31:38.442360   14557 start.go:296] duration metric: took 107.621837ms for postStartSetup
	I0605 18:31:38.442703   14557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191833
	I0605 18:31:38.459638   14557 profile.go:143] Saving config to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/config.json ...
	I0605 18:31:38.459872   14557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:31:38.459907   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:38.475496   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:38.563448   14557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:31:38.567141   14557 start.go:128] duration metric: took 14.740685932s to createHost
	I0605 18:31:38.567177   14557 start.go:83] releasing machines lock for "addons-191833", held for 14.740849048s
	I0605 18:31:38.567239   14557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191833
	I0605 18:31:38.583173   14557 ssh_runner.go:195] Run: cat /version.json
	I0605 18:31:38.583230   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:38.583238   14557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:31:38.583336   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:38.599692   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:38.600915   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:38.756888   14557 ssh_runner.go:195] Run: systemctl --version
	I0605 18:31:38.760719   14557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:31:38.764270   14557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0605 18:31:38.785752   14557 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:31:38.785808   14557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:31:38.809490   14557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0605 18:31:38.809521   14557 start.go:495] detecting cgroup driver to use...
	I0605 18:31:38.809555   14557 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0605 18:31:38.809658   14557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:31:38.823614   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0605 18:31:38.832000   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0605 18:31:38.840172   14557 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0605 18:31:38.840240   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0605 18:31:38.848691   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0605 18:31:38.856818   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0605 18:31:38.864742   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0605 18:31:38.872856   14557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 18:31:38.880260   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0605 18:31:38.888261   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0605 18:31:38.896628   14557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
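The run of sed invocations above rewrites /etc/containerd/config.toml key by key: the sandbox (pause) image, restrict_oom_score_adj, the cgroup driver, the runc runtime version, the CNI conf_dir, and enable_unprivileged_ports. The cgroup-driver edit in isolation, reusing the same sed expression as this run so containerd matches the detected cgroupfs driver:

	# keep containerd on the cgroupfs driver to match the kubelet
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd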
	I0605 18:31:38.904921   14557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 18:31:38.912002   14557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0605 18:31:38.912044   14557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0605 18:31:38.924430   14557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
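The sysctl probe just above failed only because the br_netfilter module was not yet loaded, so /proc/sys/net/bridge/ did not exist; loading the module and enabling IPv4 forwarding are the standard bridge-CNI prerequisites. The equivalent manual sequence, same commands as the log:

	sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables    # now resolves instead of exiting 255
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'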
	I0605 18:31:38.931810   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:39.002984   14557 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0605 18:31:39.094170   14557 start.go:495] detecting cgroup driver to use...
	I0605 18:31:39.094216   14557 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0605 18:31:39.094261   14557 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0605 18:31:39.105427   14557 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0605 18:31:39.105487   14557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0605 18:31:39.115141   14557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:31:39.129515   14557 ssh_runner.go:195] Run: which cri-dockerd
	I0605 18:31:39.132549   14557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0605 18:31:39.141302   14557 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0605 18:31:39.158168   14557 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0605 18:31:39.250838   14557 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0605 18:31:39.345259   14557 docker.go:587] configuring docker to use "cgroupfs" as cgroup driver...
	I0605 18:31:39.345361   14557 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0605 18:31:39.361611   14557 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0605 18:31:39.371534   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:39.447339   14557 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0605 18:31:39.718220   14557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0605 18:31:39.728403   14557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0605 18:31:39.738272   14557 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0605 18:31:39.819925   14557 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0605 18:31:39.891787   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:39.968372   14557 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0605 18:31:39.980606   14557 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0605 18:31:39.989902   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:40.059940   14557 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0605 18:31:40.114072   14557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0605 18:31:40.123593   14557 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0605 18:31:40.123651   14557 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0605 18:31:40.126557   14557 start.go:563] Will wait 60s for crictl version
	I0605 18:31:40.126598   14557 ssh_runner.go:195] Run: which crictl
	I0605 18:31:40.129280   14557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 18:31:40.158516   14557 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.1.1
	RuntimeApiVersion:  v1
	I0605 18:31:40.158579   14557 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0605 18:31:40.181203   14557 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0605 18:31:40.206247   14557 out.go:235] * Preparing Kubernetes v1.33.1 on Docker 28.1.1 ...
	I0605 18:31:40.206314   14557 cli_runner.go:164] Run: docker network inspect addons-191833 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 18:31:40.223260   14557 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0605 18:31:40.226483   14557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
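The /etc/hosts update above is idempotent: any existing host.minikube.internal line is filtered out, the fresh mapping is appended, and the result goes through a temp file before being copied back with sudo. The same idiom on its own (values as in this run; the $'\t' form just makes the literal tab in the log's command explicit):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts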
	I0605 18:31:40.235770   14557 kubeadm.go:875] updating cluster {Name:addons-191833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:addons-191833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0605 18:31:40.235874   14557 preload.go:131] Checking if preload exists for k8s version v1.33.1 and runtime docker
	I0605 18:31:40.235929   14557 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0605 18:31:40.253460   14557 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.33.1
	registry.k8s.io/kube-controller-manager:v1.33.1
	registry.k8s.io/kube-scheduler:v1.33.1
	registry.k8s.io/kube-proxy:v1.33.1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0605 18:31:40.253482   14557 docker.go:633] Images already preloaded, skipping extraction
	I0605 18:31:40.253533   14557 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0605 18:31:40.270717   14557 docker.go:703] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.33.1
	registry.k8s.io/kube-scheduler:v1.33.1
	registry.k8s.io/kube-apiserver:v1.33.1
	registry.k8s.io/kube-proxy:v1.33.1
	registry.k8s.io/etcd:3.5.21-0
	registry.k8s.io/coredns/coredns:v1.12.0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0605 18:31:40.270741   14557 cache_images.go:84] Images are preloaded, skipping loading
	I0605 18:31:40.270749   14557 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.33.1 docker true true} ...
	I0605 18:31:40.270823   14557 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.1 ClusterName:addons-191833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
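The kubelet unit above uses the same override pattern as the docker.service rewrite earlier: an empty ExecStart= clears the inherited command before the real one is set. Once the unit and the 10-kubeadm.conf drop-in have been scp'd into place (further down in the log), the merged result could be inspected on the node with:

	sudo systemctl cat kubelet    # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf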
	I0605 18:31:40.270867   14557 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0605 18:31:40.310641   14557 cni.go:84] Creating CNI manager for ""
	I0605 18:31:40.310690   14557 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0605 18:31:40.310702   14557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0605 18:31:40.310720   14557 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.33.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191833 NodeName:addons-191833 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 18:31:40.310847   14557 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-191833"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0605 18:31:40.310902   14557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.1
	I0605 18:31:40.318643   14557 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 18:31:40.318701   14557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 18:31:40.326001   14557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0605 18:31:40.340952   14557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 18:31:40.355798   14557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
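The three-document kubeadm config printed above is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml further down in the log. On kubeadm releases that ship the subcommand (v1.26 and newer), the staged file could be sanity-checked by hand; a hedged sketch reusing this run's binary path:

	sudo /var/lib/minikube/binaries/v1.33.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new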
	I0605 18:31:40.370769   14557 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0605 18:31:40.373791   14557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 18:31:40.384127   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:40.457118   14557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0605 18:31:40.469102   14557 certs.go:68] Setting up /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833 for IP: 192.168.49.2
	I0605 18:31:40.469125   14557 certs.go:194] generating shared ca certs ...
	I0605 18:31:40.469142   14557 certs.go:226] acquiring lock for ca certs: {Name:mk4210a02a88a0df8e3a10633b4ac50ade68dfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:40.469256   14557 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20889-6302/.minikube/ca.key
	I0605 18:31:40.717495   14557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt ...
	I0605 18:31:40.717526   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt: {Name:mkbbba5d3764a9daf1ba9ce180bba88708d140c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:40.717704   14557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20889-6302/.minikube/ca.key ...
	I0605 18:31:40.717716   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/ca.key: {Name:mk48346d123b0556bcc9c7bb4699f9e41a10ebaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:40.717796   14557 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.key
	I0605 18:31:40.856474   14557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.crt ...
	I0605 18:31:40.856505   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.crt: {Name:mk984bac25818ec7d14f8f905a3fb446be388d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:40.856667   14557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.key ...
	I0605 18:31:40.856678   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.key: {Name:mk79ddd7e89cb7eb1dff75d25cd589aee88087ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:40.856749   14557 certs.go:256] generating profile certs ...
	I0605 18:31:40.856801   14557 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.key
	I0605 18:31:40.856821   14557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt with IP's: []
	I0605 18:31:41.111479   14557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt ...
	I0605 18:31:41.111513   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: {Name:mked26f3ba6f9f66dbf8c0ddb644b76ac9274e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:41.111714   14557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.key ...
	I0605 18:31:41.111728   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.key: {Name:mk7f83b6b2ba95b7b3a1290d2853eac23ee39fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:41.111800   14557 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key.036d0873
	I0605 18:31:41.111819   14557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt.036d0873 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0605 18:31:41.709409   14557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt.036d0873 ...
	I0605 18:31:41.709445   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt.036d0873: {Name:mka943a6f7ee69934388f0f5f1d2e6a2ca91c3ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:41.709626   14557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key.036d0873 ...
	I0605 18:31:41.709640   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key.036d0873: {Name:mk4cd2e965a6dd5c8b103d137237e691aa53c578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:41.709719   14557 certs.go:381] copying /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt.036d0873 -> /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt
	I0605 18:31:41.709831   14557 certs.go:385] copying /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key.036d0873 -> /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key
	I0605 18:31:41.709891   14557 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.key
	I0605 18:31:41.709911   14557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.crt with IP's: []
	I0605 18:31:42.201048   14557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.crt ...
	I0605 18:31:42.201079   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.crt: {Name:mka3bec4db81652b9268112bbbcff9ba61af0631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:42.201227   14557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.key ...
	I0605 18:31:42.201237   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.key: {Name:mk1997cae5a00b3670ed7d438aeaaa46af7013d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:42.201401   14557 certs.go:484] found cert: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 18:31:42.201440   14557 certs.go:484] found cert: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/ca.pem (1078 bytes)
	I0605 18:31:42.201466   14557 certs.go:484] found cert: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/cert.pem (1123 bytes)
	I0605 18:31:42.201491   14557 certs.go:484] found cert: /home/jenkins/minikube-integration/20889-6302/.minikube/certs/key.pem (1675 bytes)
	I0605 18:31:42.202011   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 18:31:42.224125   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 18:31:42.245116   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 18:31:42.265498   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 18:31:42.286070   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0605 18:31:42.307191   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0605 18:31:42.327480   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 18:31:42.347725   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0605 18:31:42.368301   14557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 18:31:42.388940   14557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 18:31:42.403559   14557 ssh_runner.go:195] Run: openssl version
	I0605 18:31:42.408106   14557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 18:31:42.415836   14557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:31:42.418680   14557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  5 18:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:31:42.418722   14557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:31:42.424511   14557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
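The symlink name b5213941.0 is not arbitrary: OpenSSL locates CA certificates in /etc/ssl/certs through symlinks named <subject-hash>.0, and the hash comes from the openssl x509 -hash call two lines up. Reproduced by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0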
	I0605 18:31:42.432409   14557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0605 18:31:42.435307   14557 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0605 18:31:42.435366   14557 kubeadm.go:392] StartCluster: {Name:addons-191833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:addons-191833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0605 18:31:42.435456   14557 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0605 18:31:42.452022   14557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 18:31:42.460285   14557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 18:31:42.467907   14557 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0605 18:31:42.467950   14557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 18:31:42.475498   14557 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 18:31:42.475523   14557 kubeadm.go:157] found existing configuration files:
	
	I0605 18:31:42.475565   14557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0605 18:31:42.483271   14557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0605 18:31:42.483323   14557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0605 18:31:42.491020   14557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0605 18:31:42.498881   14557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0605 18:31:42.498942   14557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0605 18:31:42.506387   14557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0605 18:31:42.513305   14557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0605 18:31:42.513357   14557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0605 18:31:42.520226   14557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0605 18:31:42.527165   14557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0605 18:31:42.527207   14557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0605 18:31:42.533887   14557 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0605 18:31:42.568157   14557 kubeadm.go:310] [init] Using Kubernetes version: v1.33.1
	I0605 18:31:42.568396   14557 kubeadm.go:310] [preflight] Running pre-flight checks
	I0605 18:31:42.587473   14557 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0605 18:31:42.587565   14557 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0605 18:31:42.587615   14557 kubeadm.go:310] OS: Linux
	I0605 18:31:42.587666   14557 kubeadm.go:310] CGROUPS_CPU: enabled
	I0605 18:31:42.587735   14557 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0605 18:31:42.587822   14557 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0605 18:31:42.587895   14557 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0605 18:31:42.587974   14557 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0605 18:31:42.588051   14557 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0605 18:31:42.588117   14557 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0605 18:31:42.588183   14557 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0605 18:31:42.588263   14557 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0605 18:31:42.634418   14557 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 18:31:42.634550   14557 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 18:31:42.634693   14557 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0605 18:31:42.644095   14557 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 18:31:42.646412   14557 out.go:235]   - Generating certificates and keys ...
	I0605 18:31:42.646505   14557 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0605 18:31:42.646581   14557 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0605 18:31:42.926542   14557 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 18:31:43.257420   14557 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0605 18:31:43.385333   14557 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0605 18:31:43.452142   14557 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0605 18:31:43.593626   14557 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0605 18:31:43.593774   14557 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191833 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 18:31:43.932359   14557 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0605 18:31:43.932506   14557 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191833 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 18:31:43.976025   14557 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 18:31:44.102077   14557 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 18:31:44.303098   14557 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0605 18:31:44.303226   14557 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 18:31:44.483634   14557 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 18:31:44.649434   14557 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0605 18:31:44.689011   14557 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 18:31:45.035942   14557 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 18:31:45.211997   14557 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 18:31:45.212511   14557 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 18:31:45.214642   14557 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 18:31:45.217019   14557 out.go:235]   - Booting up control plane ...
	I0605 18:31:45.217135   14557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 18:31:45.217218   14557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 18:31:45.217303   14557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 18:31:45.225305   14557 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 18:31:45.230223   14557 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 18:31:45.230286   14557 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0605 18:31:45.310114   14557 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0605 18:31:45.310215   14557 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0605 18:31:45.811552   14557 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.584613ms
	I0605 18:31:45.826088   14557 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0605 18:31:45.826209   14557 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0605 18:31:45.826301   14557 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0605 18:31:45.826387   14557 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0605 18:31:48.888822   14557 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.062775314s
	I0605 18:31:49.426228   14557 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.600162251s
	I0605 18:31:50.827747   14557 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001718206s
	I0605 18:31:50.839135   14557 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 18:31:50.847218   14557 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 18:31:50.864067   14557 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0605 18:31:50.864254   14557 kubeadm.go:310] [mark-control-plane] Marking the node addons-191833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0605 18:31:50.871718   14557 kubeadm.go:310] [bootstrap-token] Using token: qdeoep.gg95x4z405b4aafx
	I0605 18:31:50.873145   14557 out.go:235]   - Configuring RBAC rules ...
	I0605 18:31:50.873297   14557 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 18:31:50.877409   14557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 18:31:50.882345   14557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 18:31:50.885321   14557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 18:31:50.887421   14557 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 18:31:50.889431   14557 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 18:31:51.233658   14557 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 18:31:51.729700   14557 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0605 18:31:52.234216   14557 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0605 18:31:52.235049   14557 kubeadm.go:310] 
	I0605 18:31:52.235189   14557 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0605 18:31:52.235210   14557 kubeadm.go:310] 
	I0605 18:31:52.235312   14557 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0605 18:31:52.235324   14557 kubeadm.go:310] 
	I0605 18:31:52.235357   14557 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0605 18:31:52.235439   14557 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 18:31:52.235527   14557 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 18:31:52.235538   14557 kubeadm.go:310] 
	I0605 18:31:52.235617   14557 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0605 18:31:52.235627   14557 kubeadm.go:310] 
	I0605 18:31:52.235697   14557 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0605 18:31:52.235708   14557 kubeadm.go:310] 
	I0605 18:31:52.235784   14557 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0605 18:31:52.235899   14557 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 18:31:52.236008   14557 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 18:31:52.236019   14557 kubeadm.go:310] 
	I0605 18:31:52.236129   14557 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0605 18:31:52.236264   14557 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0605 18:31:52.236283   14557 kubeadm.go:310] 
	I0605 18:31:52.236371   14557 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qdeoep.gg95x4z405b4aafx \
	I0605 18:31:52.236509   14557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9faae6cd72bc3fb95c57f78271f2d5e0b44f234761362416ede80c96f973f1d \
	I0605 18:31:52.236530   14557 kubeadm.go:310] 	--control-plane 
	I0605 18:31:52.236537   14557 kubeadm.go:310] 
	I0605 18:31:52.236665   14557 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0605 18:31:52.236675   14557 kubeadm.go:310] 
	I0605 18:31:52.236772   14557 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qdeoep.gg95x4z405b4aafx \
	I0605 18:31:52.236928   14557 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9faae6cd72bc3fb95c57f78271f2d5e0b44f234761362416ede80c96f973f1d 
	I0605 18:31:52.238109   14557 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0605 18:31:52.238412   14557 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0605 18:31:52.238572   14557 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
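The join commands above carry a --discovery-token-ca-cert-hash value; kubeadm derives it as a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal Go sketch that recomputes it, assuming the standard kubeadm CA path /etc/kubernetes/pki/ca.crt:

    // Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
    // Assumes the standard kubeadm path /etc/kubernetes/pki/ca.crt; the hash is
    // SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA public key.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }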
	I0605 18:31:52.238595   14557 cni.go:84] Creating CNI manager for ""
	I0605 18:31:52.238632   14557 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0605 18:31:52.240233   14557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0605 18:31:52.241389   14557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0605 18:31:52.249624   14557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
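The 496-byte conflist copied above is not shown in the log; the sketch below is an assumed, generic bridge-plus-portmap conflist of the shape the bridge CNI plugin accepts, with a json.Valid sanity check. The field values (subnet, bridge name, cniVersion) are illustrative, not necessarily minikube's actual file contents:

    // Sketch (assumed contents): a generic bridge CNI conflist of the kind
    // written to /etc/cni/net.d/1-k8s.conflist; json.Valid just sanity-checks it.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	fmt.Println("valid JSON:", json.Valid([]byte(conflist)))
    }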
	I0605 18:31:52.265090   14557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 18:31:52.265158   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:52.265204   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191833 minikube.k8s.io/updated_at=2025_06_05T18_31_52_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=01246adb8f85a16e6fd2bbeecb0ebb43de6563df minikube.k8s.io/name=addons-191833 minikube.k8s.io/primary=true
	I0605 18:31:52.272131   14557 ops.go:34] apiserver oom_adj: -16
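The bash pipeline above reads /proc/$(pgrep kube-apiserver)/oom_adj to confirm the apiserver received a negative OOM adjustment (-16 per the ops.go line), which makes the kernel's OOM killer less likely to pick it. The equivalent read in Go; the pid here is a placeholder, since the log resolves it with pgrep:

    // Sketch: read the apiserver's OOM adjustment the way the shell pipeline
    // above does. The pid is a placeholder; on modern kernels oom_score_adj
    // is the preferred knob, but the log reads the legacy oom_adj file.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const pid = "1234" // placeholder; the log finds it via pgrep kube-apiserver
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
    }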
	I0605 18:31:52.334183   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:52.834875   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:53.334758   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:53.834621   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:54.334456   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:54.834426   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:55.334665   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:55.834620   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:56.334341   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:56.835020   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:57.334518   14557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 18:31:57.391216   14557 kubeadm.go:1105] duration metric: took 5.126112258s to wait for elevateKubeSystemPrivileges
	I0605 18:31:57.391255   14557 kubeadm.go:394] duration metric: took 14.955908373s to StartCluster
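The burst of identical "get sa default" runs above is a poll loop: elevateKubeSystemPrivileges retries roughly every 500ms until the default ServiceAccount exists, then reports the ~5.1s duration metric. A minimal sketch of that poll-until-success pattern, with a plain kubectl invocation standing in for minikube's ssh_runner call:

    // Sketch: poll a command on a fixed interval until it succeeds or a
    // deadline passes, mirroring the repeated "kubectl get sa default" lines.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func pollUntil(timeout, interval time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := check(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for check to pass")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(2*time.Minute, 500*time.Millisecond, func() error {
    		// Stand-in for: kubectl get sa default --kubeconfig=...
    		return exec.Command("kubectl", "get", "sa", "default").Run()
    	})
    	fmt.Println("done:", err)
    }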
	I0605 18:31:57.391273   14557 settings.go:142] acquiring lock: {Name:mk6c5fac7f1b8175f7e3d6e37e98c479b3205c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:57.391374   14557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:31:57.391771   14557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/kubeconfig: {Name:mka6a02d89afb936e1c90c28e18ad72c5d05fc87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:57.391975   14557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 18:31:57.391980   14557 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0605 18:31:57.392041   14557 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0605 18:31:57.392215   14557 addons.go:69] Setting yakd=true in profile "addons-191833"
	I0605 18:31:57.392222   14557 addons.go:69] Setting metrics-server=true in profile "addons-191833"
	I0605 18:31:57.392228   14557 config.go:182] Loaded profile config "addons-191833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:31:57.392254   14557 addons.go:238] Setting addon yakd=true in "addons-191833"
	I0605 18:31:57.392234   14557 addons.go:69] Setting default-storageclass=true in profile "addons-191833"
	I0605 18:31:57.392264   14557 addons.go:238] Setting addon metrics-server=true in "addons-191833"
	I0605 18:31:57.392280   14557 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-191833"
	I0605 18:31:57.392288   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392293   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392293   14557 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-191833"
	I0605 18:31:57.392294   14557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191833"
	I0605 18:31:57.392281   14557 addons.go:69] Setting cloud-spanner=true in profile "addons-191833"
	I0605 18:31:57.392293   14557 addons.go:69] Setting storage-provisioner=true in profile "addons-191833"
	I0605 18:31:57.392316   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392321   14557 addons.go:238] Setting addon cloud-spanner=true in "addons-191833"
	I0605 18:31:57.392341   14557 addons.go:238] Setting addon storage-provisioner=true in "addons-191833"
	I0605 18:31:57.392369   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392392   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392881   14557 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191833"
	I0605 18:31:57.392908   14557 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-191833"
	I0605 18:31:57.392935   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.392935   14557 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191833"
	I0605 18:31:57.393015   14557 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-191833"
	I0605 18:31:57.393098   14557 addons.go:69] Setting registry=true in profile "addons-191833"
	I0605 18:31:57.393122   14557 addons.go:238] Setting addon registry=true in "addons-191833"
	I0605 18:31:57.393131   14557 addons.go:69] Setting volcano=true in profile "addons-191833"
	I0605 18:31:57.393147   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.393146   14557 addons.go:238] Setting addon volcano=true in "addons-191833"
	I0605 18:31:57.393174   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.393423   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.393514   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.393684   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.393724   14557 addons.go:69] Setting volumesnapshots=true in profile "addons-191833"
	I0605 18:31:57.393742   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.393766   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.394368   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.394850   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.393742   14557 addons.go:238] Setting addon volumesnapshots=true in "addons-191833"
	I0605 18:31:57.395146   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.395806   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.395975   14557 addons.go:69] Setting registry-creds=true in profile "addons-191833"
	I0605 18:31:57.396011   14557 addons.go:238] Setting addon registry-creds=true in "addons-191833"
	I0605 18:31:57.396104   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.397069   14557 addons.go:69] Setting ingress=true in profile "addons-191833"
	I0605 18:31:57.397132   14557 addons.go:238] Setting addon ingress=true in "addons-191833"
	I0605 18:31:57.397177   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.397201   14557 addons.go:69] Setting inspektor-gadget=true in profile "addons-191833"
	I0605 18:31:57.397253   14557 addons.go:238] Setting addon inspektor-gadget=true in "addons-191833"
	I0605 18:31:57.397301   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.397266   14557 addons.go:69] Setting ingress-dns=true in profile "addons-191833"
	I0605 18:31:57.397618   14557 addons.go:238] Setting addon ingress-dns=true in "addons-191833"
	I0605 18:31:57.397678   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.397781   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.398092   14557 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191833"
	I0605 18:31:57.398127   14557 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191833"
	I0605 18:31:57.398381   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.398468   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.397181   14557 addons.go:69] Setting gcp-auth=true in profile "addons-191833"
	I0605 18:31:57.399263   14557 mustload.go:65] Loading cluster: addons-191833
	I0605 18:31:57.398848   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.401168   14557 config.go:182] Loaded profile config "addons-191833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:31:57.401601   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.403038   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.408286   14557 out.go:177] * Verifying Kubernetes components...
	I0605 18:31:57.410346   14557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:31:57.423778   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.423876   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.426032   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.427203   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.440850   14557 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.34
	I0605 18:31:57.440903   14557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 18:31:57.443735   14557 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0605 18:31:57.443756   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0605 18:31:57.443813   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.444824   14557 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 18:31:57.444842   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 18:31:57.444895   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
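The docker-inspect format string used throughout this section pulls the host port mapped to the container's 22/tcp, which is how minikube locates the SSH endpoint for the clients opened below. This sketch evaluates the same Go template against a stubbed port map; note the real command wraps the template in single quotes, which minikube strips from the output:

    // Sketch: evaluate the docker inspect -f template from the log against a
    // stubbed NetworkSettings structure to extract the 22/tcp host port.
    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	data := map[string]any{
    		"NetworkSettings": map[string]any{
    			"Ports": map[string][]map[string]string{
    				"22/tcp": {{"HostIP": "127.0.0.1", "HostPort": "32768"}},
    			},
    		},
    	}
    	tmpl := template.Must(template.New("port").Parse(format))
    	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 32768
    		panic(err)
    	}
    }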
	I0605 18:31:57.451708   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0605 18:31:57.451841   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0605 18:31:57.451878   14557 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.1
	I0605 18:31:57.456377   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0605 18:31:57.456412   14557 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0605 18:31:57.456489   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.456706   14557 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.1
	I0605 18:31:57.458397   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0605 18:31:57.465193   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0605 18:31:57.465307   14557 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.1
	I0605 18:31:57.465745   14557 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0605 18:31:57.466335   14557 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0605 18:31:57.468791   14557 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:31:57.468814   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498072 bytes)
	I0605 18:31:57.468870   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.470281   14557 out.go:177]   - Using image docker.io/registry:3.0.0
	I0605 18:31:57.470383   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0605 18:31:57.470468   14557 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0605 18:31:57.470486   14557 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0605 18:31:57.470559   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.472411   14557 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0605 18:31:57.472432   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0605 18:31:57.472485   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.476580   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0605 18:31:57.484307   14557 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0605 18:31:57.484307   14557 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0605 18:31:57.491206   14557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.2
	I0605 18:31:57.492573   14557 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0605 18:31:57.492691   14557 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0605 18:31:57.503463   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0605 18:31:57.504092   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.492711   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0605 18:31:57.492731   14557 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0605 18:31:57.495136   14557 addons.go:238] Setting addon default-storageclass=true in "addons-191833"
	I0605 18:31:57.502899   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.505079   14557 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0605 18:31:57.505099   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0605 18:31:57.505148   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.506026   14557 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0605 18:31:57.506074   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.509332   14557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.3
	I0605 18:31:57.509678   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.509912   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0605 18:31:57.511243   14557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.3
	I0605 18:31:57.511244   14557 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0605 18:31:57.512483   14557 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0605 18:31:57.512496   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0605 18:31:57.512536   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.510245   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.512677   14557 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 18:31:57.512689   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0605 18:31:57.512728   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.515295   14557 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-191833"
	I0605 18:31:57.515342   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:31:57.515778   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:31:57.516018   14557 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0605 18:31:57.516086   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.517124   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0605 18:31:57.517144   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0605 18:31:57.517194   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.517448   14557 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0605 18:31:57.517526   14557 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0605 18:31:57.517686   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.518707   14557 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0605 18:31:57.518721   14557 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0605 18:31:57.518760   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.518798   14557 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 18:31:57.518808   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0605 18:31:57.518851   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.524705   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.540355   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.548405   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.549187   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.561364   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.564694   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.572301   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.577467   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.579064   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.579900   14557 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0605 18:31:57.581313   14557 out.go:177]   - Using image docker.io/busybox:stable
	I0605 18:31:57.582598   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.582998   14557 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0605 18:31:57.583014   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0605 18:31:57.583050   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.588529   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.588770   14557 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 18:31:57.588784   14557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 18:31:57.588828   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:31:57.593350   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.600235   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:31:57.605989   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	W0605 18:31:57.636099   14557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0605 18:31:57.636138   14557 retry.go:31] will retry after 136.830753ms: ssh: handshake failed: EOF
	W0605 18:31:57.637041   14557 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0605 18:31:57.637064   14557 retry.go:31] will retry after 227.039785ms: ssh: handshake failed: EOF
	I0605 18:31:57.828513   14557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0605 18:31:57.828639   14557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0605 18:31:58.024774   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0605 18:31:58.045322   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0605 18:31:58.226631   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0605 18:31:58.226665   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0605 18:31:58.236674   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0605 18:31:58.238930   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 18:31:58.239228   14557 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0605 18:31:58.239278   14557 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0605 18:31:58.244519   14557 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0605 18:31:58.244543   14557 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0605 18:31:58.246698   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:31:58.247516   14557 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0605 18:31:58.247574   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0605 18:31:58.425009   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0605 18:31:58.426059   14557 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0605 18:31:58.426095   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0605 18:31:58.435185   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0605 18:31:58.443365   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 18:31:58.523863   14557 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0605 18:31:58.523968   14557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0605 18:31:58.528355   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0605 18:31:58.532342   14557 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0605 18:31:58.532388   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0605 18:31:58.532722   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 18:31:58.547311   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 18:31:58.625639   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0605 18:31:58.625678   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0605 18:31:58.626370   14557 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0605 18:31:58.626482   14557 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0605 18:31:58.649262   14557 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0605 18:31:58.649336   14557 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0605 18:31:58.737291   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0605 18:31:58.737338   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0605 18:31:59.125318   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0605 18:31:59.125378   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0605 18:31:59.131724   14557 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 18:31:59.131751   14557 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0605 18:31:59.140481   14557 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0605 18:31:59.140513   14557 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0605 18:31:59.240507   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0605 18:31:59.336829   14557 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0605 18:31:59.336871   14557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0605 18:31:59.531750   14557 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0605 18:31:59.531777   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0605 18:31:59.543582   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 18:31:59.726601   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0605 18:31:59.726636   14557 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0605 18:32:00.238604   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0605 18:32:00.433119   14557 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.604429246s)
	I0605 18:32:00.434215   14557 node_ready.go:35] waiting up to 6m0s for node "addons-191833" to be "Ready" ...
	I0605 18:32:00.434492   14557 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.605940676s)
	I0605 18:32:00.434514   14557 start.go:972] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
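The long sed pipeline completed above splices a hosts{} stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address 192.168.49.1. The same insertion expressed directly in Go, operating on a trimmed-down Corefile:

    // Sketch: insert the hosts{} stanza before the forward directive, as the
    // sed expression in the log does. The Corefile here is abbreviated.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
    }`
    	hosts := `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
    `
    	var b strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line + "\n")
    	}
    	fmt.Print(b.String())
    }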
	I0605 18:32:00.437388   14557 node_ready.go:49] node "addons-191833" is "Ready"
	I0605 18:32:00.437421   14557 node_ready.go:38] duration metric: took 3.173741ms for node "addons-191833" to be "Ready" ...
	I0605 18:32:00.437456   14557 api_server.go:52] waiting for apiserver process to appear ...
	I0605 18:32:00.437523   14557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:32:00.631037   14557 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0605 18:32:00.631069   14557 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0605 18:32:00.640233   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0605 18:32:00.640262   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0605 18:32:00.944700   14557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191833" context rescaled to 1 replicas
	I0605 18:32:01.144892   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.099472783s)
	I0605 18:32:01.144980   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.119909455s)
	I0605 18:32:01.445179   14557 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0605 18:32:01.445205   14557 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0605 18:32:01.825349   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0605 18:32:01.825458   14557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0605 18:32:02.141977   14557 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 18:32:02.142062   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0605 18:32:02.525511   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0605 18:32:02.525615   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0605 18:32:02.625257   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 18:32:02.835331   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0605 18:32:02.835422   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0605 18:32:03.128408   14557 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 18:32:03.128503   14557 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0605 18:32:03.425624   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 18:32:03.433551   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.196815742s)
	I0605 18:32:03.433607   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.194512292s)
	I0605 18:32:04.529103   14557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0605 18:32:04.529188   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:32:04.557410   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:32:05.745434   14557 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0605 18:32:05.929390   14557 addons.go:238] Setting addon gcp-auth=true in "addons-191833"
	I0605 18:32:05.929463   14557 host.go:66] Checking if "addons-191833" exists ...
	I0605 18:32:05.929985   14557 cli_runner.go:164] Run: docker container inspect addons-191833 --format={{.State.Status}}
	I0605 18:32:05.955581   14557 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0605 18:32:05.955632   14557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191833
	I0605 18:32:05.972648   14557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/addons-191833/id_rsa Username:docker}
	I0605 18:32:10.624857   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (12.199804944s)
	I0605 18:32:10.624921   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (12.189711888s)
	I0605 18:32:10.625024   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.181637406s)
	I0605 18:32:10.625036   14557 addons.go:479] Verifying addon ingress=true in "addons-191833"
	I0605 18:32:10.625199   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (12.096808525s)
	I0605 18:32:10.625251   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.09251119s)
	I0605 18:32:10.625288   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.378543782s)
	I0605 18:32:10.625458   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.384919415s)
	I0605 18:32:10.625490   14557 addons.go:479] Verifying addon registry=true in "addons-191833"
	I0605 18:32:10.625516   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.081904571s)
	I0605 18:32:10.625541   14557 addons.go:479] Verifying addon metrics-server=true in "addons-191833"
	W0605 18:32:10.625497   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system created
	namespace/volcano-monitoring created
	serviceaccount/volcano-admission created
	configmap/volcano-admission-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-admission created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role created
	service/volcano-admission-service created
	deployment.apps/volcano-admission created
	serviceaccount/volcano-admission-init created
	role.rbac.authorization.k8s.io/volcano-admission-init created
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role created
	job.batch/volcano-admission-init created
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh created
	serviceaccount/volcano-controllers created
	configmap/volcano-controller-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-controllers created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role created
	service/volcano-controllers-service created
	deployment.apps/volcano-controllers created
	serviceaccount/volcano-scheduler created
	configmap/volcano-scheduler-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role created
	service/volcano-scheduler-service created
	deployment.apps/volcano-scheduler created
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh created
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate created
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate created
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh created
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:32:10.625580   14557 retry.go:31] will retry after 185.613442ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system created
	namespace/volcano-monitoring created
	serviceaccount/volcano-admission created
	configmap/volcano-admission-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-admission created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role created
	service/volcano-admission-service created
	deployment.apps/volcano-admission created
	serviceaccount/volcano-admission-init created
	role.rbac.authorization.k8s.io/volcano-admission-init created
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role created
	job.batch/volcano-admission-init created
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh created
	serviceaccount/volcano-controllers created
	configmap/volcano-controller-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-controllers created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role created
	service/volcano-controllers-service created
	deployment.apps/volcano-controllers created
	serviceaccount/volcano-scheduler created
	configmap/volcano-scheduler-configmap created
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler created
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role created
	service/volcano-scheduler-service created
	deployment.apps/volcano-scheduler created
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh created
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate created
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate created
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate created
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh created
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh created
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
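The volcano apply fails because the CRD's CEL validation rule reaches the apiserver carrying the HTML entity &lt;= where CEL expects <=; the compiler's "token recognition error at: '&l'" points at exactly that. The escaping presumably crept in somewhere in the manifest's generation pipeline; unescaping recovers the intended expression, as this sketch shows:

    // Sketch: why the CEL rule fails to compile. The rule string in the
    // manifest is HTML-escaped; html.UnescapeString (stdlib) recovers the
    // expression the CRD author intended.
    package main

    import (
    	"fmt"
    	"html"
    )

    func main() {
    	escaped := `(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1`
    	fmt.Println(html.UnescapeString(escaped))
    	// prints: ... (has(self.labelMatch) ? 1 : 0) <= 1
    }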
	I0605 18:32:10.625408   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.07804314s)
	I0605 18:32:10.625650   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.386961367s)
	I0605 18:32:10.625695   14557 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.188153323s)
	I0605 18:32:10.625717   14557 api_server.go:72] duration metric: took 13.233712902s to wait for apiserver process to appear ...
	I0605 18:32:10.625728   14557 api_server.go:88] waiting for apiserver healthz status ...
	I0605 18:32:10.625751   14557 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0605 18:32:10.625801   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.000503087s)
	W0605 18:32:10.625866   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0605 18:32:10.625882   14557 retry.go:31] will retry after 220.371708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I0605 18:32:10.627060   14557 out.go:177] * Verifying ingress addon...
	I0605 18:32:10.628248   14557 out.go:177] * Verifying registry addon...
	I0605 18:32:10.628283   14557 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191833 service yakd-dashboard -n yakd-dashboard
	
	I0605 18:32:10.630168   14557 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0605 18:32:10.631031   14557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0605 18:32:10.632038   14557 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0605 18:32:10.633472   14557 api_server.go:141] control plane version: v1.33.1
	I0605 18:32:10.633498   14557 api_server.go:131] duration metric: took 7.756375ms to wait for apiserver health ...
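The healthz wait above is a plain unauthenticated GET against the API server. Outside the test harness the same probe can be reproduced as follows (assuming the cluster's default system:public-info-viewer binding, which exposes /healthz anonymously, is intact):

	curl -k https://192.168.49.2:8443/healthz
	# ok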
	I0605 18:32:10.633509   14557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 18:32:10.636634   14557 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0605 18:32:10.636658   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:10.637057   14557 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0605 18:32:10.637074   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:10.642821   14557 system_pods.go:59] 18 kube-system pods found
	I0605 18:32:10.642876   14557 system_pods.go:61] "amd-gpu-device-plugin-kg7z2" [72a513ae-8981-48d4-ab13-b4d8f6e6efed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0605 18:32:10.642887   14557 system_pods.go:61] "coredns-674b8bbfcf-g9jrw" [9421456f-39de-4eca-b36d-c44346a687f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0605 18:32:10.642898   14557 system_pods.go:61] "coredns-674b8bbfcf-lnwdt" [f1344d29-33f8-4394-8cbf-1c5f4f7a37f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0605 18:32:10.642904   14557 system_pods.go:61] "csi-hostpath-attacher-0" [bbb3e18c-c4c8-44d5-aa98-f7d9de1cf4a9] Pending
	I0605 18:32:10.642909   14557 system_pods.go:61] "etcd-addons-191833" [9cdcb208-540e-492e-9639-7c5b7db2ce82] Running
	I0605 18:32:10.642915   14557 system_pods.go:61] "kube-apiserver-addons-191833" [9237f9c6-406a-470c-9a7f-8c26368af7e5] Running
	I0605 18:32:10.642920   14557 system_pods.go:61] "kube-controller-manager-addons-191833" [656668ad-bfcd-4abb-b6e8-24858521c220] Running
	I0605 18:32:10.642928   14557 system_pods.go:61] "kube-ingress-dns-minikube" [495e1c3e-27ca-4255-9f7f-8f6715337f59] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 18:32:10.642932   14557 system_pods.go:61] "kube-proxy-g2l9p" [4753a393-80f3-4543-bf47-fc080125b03b] Running
	I0605 18:32:10.642938   14557 system_pods.go:61] "kube-scheduler-addons-191833" [0b2b6c42-f781-44cb-82c3-fcfd1bd7ee2e] Running
	I0605 18:32:10.642945   14557 system_pods.go:61] "metrics-server-7fbb699795-5m64w" [96f6d999-ab98-4e15-81fd-bef3cfec2f61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0605 18:32:10.642954   14557 system_pods.go:61] "nvidia-device-plugin-daemonset-6sz4x" [b05d034d-94e5-4669-a212-ca20ff817f1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0605 18:32:10.642967   14557 system_pods.go:61] "registry-694bd45846-k78hm" [3fc306df-aca8-427f-9e3d-8f92e7b1cad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0605 18:32:10.642981   14557 system_pods.go:61] "registry-creds-6b69cdcdd5-glqcp" [5e782afc-a232-44d7-801a-bbdd9acbe3bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0605 18:32:10.642989   14557 system_pods.go:61] "registry-proxy-mq24t" [ad022fb2-e887-4f8e-a153-c311bd3ff71b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 18:32:10.642997   14557 system_pods.go:61] "snapshot-controller-68b874b76f-24xzj" [284670af-0b6a-4461-b9b5-7fad71ddeda6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0605 18:32:10.643011   14557 system_pods.go:61] "snapshot-controller-68b874b76f-8khc5" [e3ffe688-9057-40a5-9f78-060da3484d75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0605 18:32:10.643019   14557 system_pods.go:61] "storage-provisioner" [08f5838c-79f2-4067-84ea-70cf1e6bbec8] Running
	I0605 18:32:10.643027   14557 system_pods.go:74] duration metric: took 9.510074ms to wait for pod list to return data ...
	I0605 18:32:10.643036   14557 default_sa.go:34] waiting for default service account to be created ...
	I0605 18:32:10.647920   14557 default_sa.go:45] found service account: "default"
	I0605 18:32:10.648001   14557 default_sa.go:55] duration metric: took 4.957337ms for default service account to be created ...
	I0605 18:32:10.648027   14557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 18:32:10.740560   14557 system_pods.go:86] 19 kube-system pods found
	I0605 18:32:10.740677   14557 system_pods.go:89] "amd-gpu-device-plugin-kg7z2" [72a513ae-8981-48d4-ab13-b4d8f6e6efed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0605 18:32:10.741216   14557 system_pods.go:89] "coredns-674b8bbfcf-g9jrw" [9421456f-39de-4eca-b36d-c44346a687f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0605 18:32:10.741283   14557 system_pods.go:89] "coredns-674b8bbfcf-lnwdt" [f1344d29-33f8-4394-8cbf-1c5f4f7a37f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0605 18:32:10.741302   14557 system_pods.go:89] "csi-hostpath-attacher-0" [bbb3e18c-c4c8-44d5-aa98-f7d9de1cf4a9] Pending
	I0605 18:32:10.741317   14557 system_pods.go:89] "csi-hostpathplugin-scz49" [1094ee0c-0496-45e9-8195-2abee8ac860b] Pending
	I0605 18:32:10.741337   14557 system_pods.go:89] "etcd-addons-191833" [9cdcb208-540e-492e-9639-7c5b7db2ce82] Running
	I0605 18:32:10.741364   14557 system_pods.go:89] "kube-apiserver-addons-191833" [9237f9c6-406a-470c-9a7f-8c26368af7e5] Running
	I0605 18:32:10.741380   14557 system_pods.go:89] "kube-controller-manager-addons-191833" [656668ad-bfcd-4abb-b6e8-24858521c220] Running
	I0605 18:32:10.741409   14557 system_pods.go:89] "kube-ingress-dns-minikube" [495e1c3e-27ca-4255-9f7f-8f6715337f59] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 18:32:10.741436   14557 system_pods.go:89] "kube-proxy-g2l9p" [4753a393-80f3-4543-bf47-fc080125b03b] Running
	I0605 18:32:10.741451   14557 system_pods.go:89] "kube-scheduler-addons-191833" [0b2b6c42-f781-44cb-82c3-fcfd1bd7ee2e] Running
	I0605 18:32:10.741468   14557 system_pods.go:89] "metrics-server-7fbb699795-5m64w" [96f6d999-ab98-4e15-81fd-bef3cfec2f61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0605 18:32:10.741489   14557 system_pods.go:89] "nvidia-device-plugin-daemonset-6sz4x" [b05d034d-94e5-4669-a212-ca20ff817f1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0605 18:32:10.741523   14557 system_pods.go:89] "registry-694bd45846-k78hm" [3fc306df-aca8-427f-9e3d-8f92e7b1cad6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0605 18:32:10.741548   14557 system_pods.go:89] "registry-creds-6b69cdcdd5-glqcp" [5e782afc-a232-44d7-801a-bbdd9acbe3bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0605 18:32:10.741565   14557 system_pods.go:89] "registry-proxy-mq24t" [ad022fb2-e887-4f8e-a153-c311bd3ff71b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 18:32:10.741582   14557 system_pods.go:89] "snapshot-controller-68b874b76f-24xzj" [284670af-0b6a-4461-b9b5-7fad71ddeda6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0605 18:32:10.741605   14557 system_pods.go:89] "snapshot-controller-68b874b76f-8khc5" [e3ffe688-9057-40a5-9f78-060da3484d75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0605 18:32:10.741628   14557 system_pods.go:89] "storage-provisioner" [08f5838c-79f2-4067-84ea-70cf1e6bbec8] Running
	I0605 18:32:10.741647   14557 system_pods.go:126] duration metric: took 93.603138ms to wait for k8s-apps to be running ...
	I0605 18:32:10.741667   14557 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 18:32:10.741740   14557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:32:10.812160   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:10.847164   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 18:32:11.127317   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.701635935s)
	I0605 18:32:11.127356   14557 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-191833"
	I0605 18:32:11.127989   14557 system_svc.go:56] duration metric: took 386.314935ms WaitForService to wait for kubelet
	I0605 18:32:11.128136   14557 kubeadm.go:578] duration metric: took 13.736107483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 18:32:11.128178   14557 node_conditions.go:102] verifying NodePressure condition ...
	I0605 18:32:11.128091   14557 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.172324584s)
	I0605 18:32:11.129526   14557 out.go:177] * Verifying csi-hostpath-driver addon...
	I0605 18:32:11.129677   14557 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.3
	I0605 18:32:11.131901   14557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0605 18:32:11.133094   14557 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0605 18:32:11.133119   14557 node_conditions.go:123] node cpu capacity is 8
	I0605 18:32:11.133134   14557 node_conditions.go:105] duration metric: took 4.915683ms to run NodePressure ...
	I0605 18:32:11.133149   14557 start.go:241] waiting for startup goroutines ...
	I0605 18:32:11.133224   14557 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0605 18:32:11.134605   14557 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0605 18:32:11.134634   14557 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0605 18:32:11.136783   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:11.137011   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:11.137171   14557 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0605 18:32:11.137189   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:11.249907   14557 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0605 18:32:11.249935   14557 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0605 18:32:11.439522   14557 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 18:32:11.439550   14557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0605 18:32:11.537893   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 18:32:11.635179   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:11.635372   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:11.636519   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:12.135045   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:12.136143   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:12.137750   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:12.635235   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:12.635430   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:12.635602   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:13.227856   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:13.228176   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:13.228251   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:13.635037   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:13.635393   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:13.635655   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:14.135079   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:14.135218   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:14.235294   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:14.634183   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:14.634312   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:14.634447   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:15.133404   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:15.133637   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:15.134712   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:15.635040   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:15.635107   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:15.635112   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:16.135838   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:16.135848   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:16.136329   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:16.634308   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:16.634426   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:16.634487   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:17.133290   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:17.133547   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:17.134618   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:17.159920   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (6.347718027s)
	W0605 18:32:17.159967   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:32:17.160001   14557 retry.go:31] will retry after 267.924119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
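Unlike the snapshot-class race earlier, this Volcano failure is deterministic: the hypernodes.topology.volcano.sh CRD inside volcano-deployment.yaml ships with an HTML-escaped comparison operator. Its CEL validation rule ends in the literal five-character sequence "&lt;=" where the operator "<=" belongs, so the CEL compiler stops at the stray "&" (column 98 in the errors above), and reapplying the unchanged manifest can only fail the same way. The intended rule is:

	(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) <= 1

A possible local workaround, offered only as a sketch (the proper fix is regenerating the manifest without HTML escaping), is to unescape the operator in the shipped file before it is applied:

	grep -n '&lt;' /etc/kubernetes/addons/volcano-deployment.yaml
	sudo sed -i 's/&lt;/</g' /etc/kubernetes/addons/volcano-deployment.yaml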
	I0605 18:32:17.160124   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.312918432s)
	I0605 18:32:17.160206   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (5.622282053s)
	I0605 18:32:17.161155   14557 addons.go:479] Verifying addon gcp-auth=true in "addons-191833"
	I0605 18:32:17.163994   14557 out.go:177] * Verifying gcp-auth addon...
	I0605 18:32:17.166178   14557 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0605 18:32:17.233910   14557 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0605 18:32:17.233936   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:17.428077   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:17.633369   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:17.634601   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:17.634817   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:17.669621   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:18.133981   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:18.134184   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:18.134319   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:18.169463   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:18.633681   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:18.633700   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:18.634823   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:18.669404   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:19.133715   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:19.133921   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:19.134502   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:19.169131   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:19.633486   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:19.634036   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:19.634466   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:19.725783   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:20.133796   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:20.134148   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:20.134404   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:20.169279   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:20.633873   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:20.633979   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:20.634971   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:20.734306   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:21.133271   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:21.133542   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:21.134896   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:21.169309   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:21.635327   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:21.635706   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:21.635800   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:21.725255   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:21.983719   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.555606858s)
	W0605 18:32:21.983762   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:32:21.983790   14557 retry.go:31] will retry after 707.712135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
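The retry machinery itself is behaving as designed: retry.go backs off with increasing, jittered delays (268ms, then 708ms, then 1.24s across these attempts), which is the right shape for transient failures like the CRD race, but no amount of retrying helps a manifest that is deterministically invalid. A minimal shell sketch of the same client-side pattern (attempt count and delays are illustrative, not minikube's exact values):

	delay=0.2
	for attempt in 1 2 3 4 5; do
	  kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml && break
	  echo "apply failed (attempt ${attempt}), retrying in ${delay}s" >&2
	  sleep "${delay}"
	  delay=$(awk "BEGIN {print ${delay} * 1.5}")  # grow the delay each round
	done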
	I0605 18:32:22.133633   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:22.133766   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:22.134936   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:22.169383   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:22.633527   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:22.633674   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:22.634950   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:22.669539   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:22.692655   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:23.133913   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:23.134158   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:23.134398   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:23.169011   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:23.633638   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:23.633797   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:23.634886   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:23.669294   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:24.134050   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:24.134716   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:24.134863   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:24.169172   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:24.679743   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:24.680127   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:24.680357   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:24.680438   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:25.133759   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:25.133794   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:25.134939   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:25.169601   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:25.634808   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:25.635182   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:25.635315   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:25.668867   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:25.693282   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.00059284s)
	W0605 18:32:25.693318   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:32:25.693337   14557 retry.go:31] will retry after 1.238380792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
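	Because minikube re-applies the same /etc/kubernetes/addons/volcano-deployment.yaml on every retry, the identical error recurs on each attempt. To confirm the escaped bytes on the node, something like the following should work (hedged: assumes the profile name addons-191833 from this run and the manifest path quoted in the log):
	
	    minikube -p addons-191833 ssh -- grep -n '&lt;=' /etc/kubernetes/addons/volcano-deployment.yaml
	
	A one-off workaround, run inside the node before the next retry picks the file up, would be to unescape the operator in place:
	
	    sudo sed -i 's/&lt;=/<=/g' /etc/kubernetes/addons/volcano-deployment.yaml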
	I0605 18:32:26.133431   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:26.133835   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:26.134044   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:26.169252   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:26.633533   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:26.633743   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:26.635445   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:26.670256   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:26.932377   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:27.133733   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:27.133930   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:27.134999   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:27.169452   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:27.634757   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:27.635313   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:27.636056   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:27.668856   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:28.134907   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:28.135018   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:28.135183   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:28.169097   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:28.633137   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:28.633912   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:28.634753   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:28.669131   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:29.134032   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:29.134076   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:29.134343   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:29.168919   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:29.634407   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:29.634623   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:29.634846   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:29.669045   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:30.133834   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:30.133966   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:30.134031   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:30.168417   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:30.633333   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:30.633477   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:30.634690   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:30.725778   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:31.052994   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.120569525s)
	W0605 18:32:31.053081   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:31.053114   14557 retry.go:31] will retry after 1.778183139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:31.134319   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:31.134458   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:31.134647   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:31.169452   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:31.634267   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:31.634325   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:31.634463   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:31.669583   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:32.134489   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:32.134579   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:32.134758   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:32.168949   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:32.634216   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:32.634234   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:32.634224   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:32.668991   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:32.832206   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:33.134357   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:33.134489   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:33.134684   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:33.169380   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:33.633669   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:33.633736   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:33.634929   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:33.669223   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:34.133702   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:34.133867   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:34.134158   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:34.169594   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:34.633688   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:34.633855   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:34.634290   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:34.668986   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:35.133693   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:35.133749   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:35.134561   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:35.169076   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:35.633809   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:35.633982   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:35.635305   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:35.669290   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:36.133828   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:36.134171   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:36.134287   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:36.169779   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:36.480939   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.648691587s)
	W0605 18:32:36.480986   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:36.481014   14557 retry.go:31] will retry after 2.150242068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:36.634074   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:36.634105   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:36.634224   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:36.669403   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:37.134129   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:37.134163   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:37.134327   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:37.234976   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:37.632850   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:37.634035   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:37.635192   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:37.725349   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:38.133704   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:38.134407   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:38.134757   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:38.169676   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:38.631506   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:38.634295   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:38.634330   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:38.634413   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:38.668890   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:39.134461   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:39.134482   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:39.134467   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:39.168729   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:39.634865   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:39.635141   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:39.635616   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:39.669019   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:40.133424   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:40.133697   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:40.134714   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:40.169092   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:40.633887   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:40.634149   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:40.634188   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:40.734926   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:41.134197   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:41.134209   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:41.134253   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 18:32:41.174036   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:41.633754   14557 kapi.go:107] duration metric: took 31.002723339s to wait for kubernetes.io/minikube-addons=registry ...
	I0605 18:32:41.634106   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:41.634267   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:41.669893   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:41.954978   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.323431254s)
	W0605 18:32:41.955020   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:41.955048   14557 retry.go:31] will retry after 1.507715883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:42.134284   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:42.134393   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:42.225606   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:42.634048   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:42.635317   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:42.669758   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:43.133934   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:43.135495   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:43.168776   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:43.462996   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:43.633107   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:43.634971   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:43.669690   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:44.134169   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:44.134617   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:44.169516   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:44.633867   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:44.634318   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:44.669491   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:45.135224   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:45.135757   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:45.168824   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:45.633114   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:45.635417   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:45.724880   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:46.134680   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:46.134704   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:46.225851   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:46.633330   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:46.635565   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:46.669385   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:47.133488   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:47.135349   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:47.224809   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:47.633263   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:47.634499   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:47.725469   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:48.042976   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.579940784s)
	W0605 18:32:48.043032   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	[identical to the apply output shown above: every Volcano resource "unchanged" except validatingwebhookconfiguration volcano-admission-service-hypernodes-validate, which is "configured"]
	
	stderr:
	[same CEL compilation error for the hypernodes.topology.volcano.sh CRD as shown above]
	I0605 18:32:48.043064   14557 retry.go:31] will retry after 5.23876125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the failure logged immediately above]
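Note: every failed apply in this log shares one root cause. The manifest at /etc/kubernetes/addons/volcano-deployment.yaml carries the HTML entity "&lt;=" inside the CEL validation rule of the hypernodes.topology.volcano.sh CRD, and the CEL compiler's caret at column 98 points exactly at that '&'. The rule was evidently meant to use the literal operator, i.e. (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) <= 1. A minimal check from inside the node, assuming shell access (path copied verbatim from the log):

	# Locate the escaped operator in the shipped manifest; any hit confirms
	# the HTML entity leaked into the YAML that kubectl keeps re-applying.
	grep -n '&lt;=' /etc/kubernetes/addons/volcano-deployment.yaml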
	I0605 18:32:48.133974   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:48.134270   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:48.170143   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:48.636342   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:48.636634   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:48.725562   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:49.133809   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:49.134119   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:49.258503   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:49.634149   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:49.635146   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:49.669537   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:50.133922   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:50.134316   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:50.169671   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:50.633620   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:50.634242   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:50.734064   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:51.133283   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:51.134650   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:51.169099   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:51.724901   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:51.725085   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:51.725125   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:52.134439   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:52.134629   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:52.169254   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:52.633466   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:52.635233   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:52.669410   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:53.133621   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:53.135106   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:53.169219   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:53.282381   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:32:53.634779   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:53.635807   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:53.726690   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:54.135706   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:54.135884   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:54.225224   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:54.633039   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:54.634768   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:54.669252   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:55.133854   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:55.134595   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:55.169103   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:55.633884   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:55.634326   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:55.669101   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:56.134316   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:56.134523   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:56.169258   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:56.634110   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:56.634449   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:56.668828   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:57.133529   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:57.134859   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:57.169549   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:57.633315   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:57.635002   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:57.725512   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:57.949615   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.667166305s)
	W0605 18:32:57.949659   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the first failed apply at 18:32:48 above]
	I0605 18:32:57.949690   14557 retry.go:31] will retry after 6.725611583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the failure logged immediately above]
	I0605 18:32:58.133367   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:58.135097   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:58.169381   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:58.633732   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:58.635071   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:58.702113   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:59.202630   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:32:59.202850   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:59.202899   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:59.633072   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:32:59.635059   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:32:59.669554   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:00.133715   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:00.134092   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:00.180247   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:00.724569   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:00.724787   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:00.724787   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:01.158501   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:01.158649   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:01.260207   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:01.633342   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:01.635609   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:01.669346   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:02.133557   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:02.135211   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:02.192558   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:02.634332   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:02.634413   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:02.668999   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:03.134299   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:03.134670   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:03.236671   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:03.633966   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:03.634362   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:03.669060   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:04.133392   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:04.134929   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:04.169673   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:04.634546   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:04.634634   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:04.669186   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:04.676328   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:33:05.133161   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:05.134243   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:05.169533   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:05.633526   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:05.634132   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:05.726193   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:06.134592   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:06.135008   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:06.168981   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:06.636395   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:06.727237   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:06.727515   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:07.135279   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:07.135467   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:07.169330   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:07.634408   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:07.634494   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:07.734700   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:08.133224   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:08.135181   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:08.169420   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:08.634251   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:08.634777   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:08.669557   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:09.076876   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.400516073s)
	W0605 18:33:09.076913   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the first failed apply at 18:32:48 above]
	I0605 18:33:09.076938   14557 retry.go:31] will retry after 5.879970835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the failure logged immediately above]
	I0605 18:33:09.133894   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:09.134166   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:09.233950   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:09.633088   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:09.634974   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:09.669299   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:10.133884   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:10.134979   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:10.234303   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:10.633984   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:10.634938   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 18:33:10.734502   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:11.133481   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:11.134969   14557 kapi.go:107] duration metric: took 1m0.003067473s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0605 18:33:11.169451   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:11.634731   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:11.669335   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:12.133548   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:12.225163   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:12.633522   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:12.725286   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:13.133359   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:13.168826   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:13.634269   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:13.669032   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:14.133812   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:14.169441   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:14.633886   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:14.669403   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:14.957680   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:33:15.134054   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:15.169432   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:15.633845   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:15.672424   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:16.134252   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:16.225186   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:16.633888   14557 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 18:33:16.669028   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:17.228256   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:17.228330   14557 kapi.go:107] duration metric: took 1m6.598160129s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0605 18:33:17.669872   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:17.878315   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.92059573s)
	W0605 18:33:17.878352   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the first failed apply at 18:32:48 above]
	I0605 18:33:17.878386   14557 retry.go:31] will retry after 17.856838782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the failure logged immediately above]
	I0605 18:33:18.170021   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:18.669120   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:19.168837   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:19.668813   14557 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 18:33:20.242937   14557 kapi.go:107] duration metric: took 1m3.076755296s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0605 18:33:20.244767   14557 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191833 cluster.
	I0605 18:33:20.246316   14557 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0605 18:33:20.247619   14557 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
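
[Editor's note] The gcp-auth opt-out described in the message above takes the form of a pod label. A minimal sketch of a pod that skips credential mounting is shown below; the pod name, image, and label value are illustrative (per the message, only the presence of the `gcp-auth-skip-secret` key matters):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"    # key presence opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: busybox:1.36             # placeholder image
        command: ["sleep", "3600"]
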
	I0605 18:33:35.736819   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:33:37.814288   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.07743348s)
	W0605 18:33:37.814324   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:33:37.814346   14557 retry.go:31] will retry after 13.776816519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	I0605 18:33:51.591509   14557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0605 18:33:53.721799   14557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.130239258s)
	W0605 18:33:53.721842   14557 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	W0605 18:33:53.722108   14557 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.1/kubectl apply --force -f /etc/kubernetes/addons/volcano-deployment.yaml: Process exited with status 1
	stdout:
	namespace/volcano-system unchanged
	namespace/volcano-monitoring unchanged
	serviceaccount/volcano-admission unchanged
	configmap/volcano-admission-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-admission unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-admission-role unchanged
	service/volcano-admission-service unchanged
	deployment.apps/volcano-admission unchanged
	serviceaccount/volcano-admission-init unchanged
	role.rbac.authorization.k8s.io/volcano-admission-init unchanged
	rolebinding.rbac.authorization.k8s.io/volcano-admission-init-role unchanged
	job.batch/volcano-admission-init unchanged
	customresourcedefinition.apiextensions.k8s.io/jobs.batch.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/commands.bus.volcano.sh unchanged
	serviceaccount/volcano-controllers unchanged
	configmap/volcano-controller-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-controllers unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-controllers-role unchanged
	service/volcano-controllers-service unchanged
	deployment.apps/volcano-controllers unchanged
	serviceaccount/volcano-scheduler unchanged
	configmap/volcano-scheduler-configmap unchanged
	clusterrole.rbac.authorization.k8s.io/volcano-scheduler unchanged
	clusterrolebinding.rbac.authorization.k8s.io/volcano-scheduler-role unchanged
	service/volcano-scheduler-service unchanged
	deployment.apps/volcano-scheduler unchanged
	customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/queues.scheduling.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/numatopologies.nodeinfo.volcano.sh unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-mutate unchanged
	mutatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-mutate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-jobs-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-queues-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-podgroups-validate unchanged
	validatingwebhookconfiguration.admissionregistration.k8s.io/volcano-admission-service-hypernodes-validate configured
	customresourcedefinition.apiextensions.k8s.io/jobtemplates.flow.volcano.sh unchanged
	customresourcedefinition.apiextensions.k8s.io/jobflows.flow.volcano.sh unchanged
	
	stderr:
	The CustomResourceDefinition "hypernodes.topology.volcano.sh" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[members].items.properties[selector].x-kubernetes-validations[1].rule: Invalid value: apiextensions.ValidationRule{Rule:"(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1", Message:"Only one of ExactMatch, RegexMatch, or LabelMatch can be specified", MessageExpression:"", Reason:(*apiextensions.FieldValueErrorReason)(nil), FieldPath:"", OptionalOldSelf:(*bool)(nil)}: compilation failed: ERROR: <input>:1:98: Syntax error: token recognition error at: '&l'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .................................................................................................^
	ERROR: <input>:1:100: Syntax error: mismatched input 't' expecting <EOF>
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ...................................................................................................^
	ERROR: <input>:1:101: Syntax error: token recognition error at: ';'
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | ....................................................................................................^
	ERROR: <input>:1:102: Syntax error: token recognition error at: '= '
	 | (has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) &lt;= 1
	 | .....................................................................................................^
	]
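
[Editor's note] For context on the repeated failure above: the CEL compiler errors (token recognition at '&l', 't', ';', '= ') indicate that the rule string in the applied manifest literally contains the HTML entity `&lt;=` where the CEL operator `<=` was intended, so the apiserver rejects the hypernodes CRD on every retry. A minimal sketch of the corrected `x-kubernetes-validations` entry, assuming the fix is simply unescaping the operator (rule and message copied from the error, surrounding schema abridged):

    # under ...properties[spec].properties[members].items.properties[selector] (abridged)
    x-kubernetes-validations:
    - rule: "(has(self.exactMatch) ? 1 : 0) + (has(self.regexMatch) ? 1 : 0) + (has(self.labelMatch) ? 1 : 0) <= 1"
      message: "Only one of ExactMatch, RegexMatch, or LabelMatch can be specified"
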
	I0605 18:33:53.723971   14557 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, registry-creds, amd-gpu-device-plugin, inspektor-gadget, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0605 18:33:53.725109   14557 addons.go:514] duration metric: took 1m56.33307045s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher registry-creds amd-gpu-device-plugin inspektor-gadget metrics-server ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0605 18:33:53.725150   14557 start.go:246] waiting for cluster config update ...
	I0605 18:33:53.725176   14557 start.go:255] writing updated cluster config ...
	I0605 18:33:53.725391   14557 ssh_runner.go:195] Run: rm -f paused
	I0605 18:33:53.728507   14557 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0605 18:33:53.731790   14557 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-lnwdt" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.735282   14557 pod_ready.go:94] pod "coredns-674b8bbfcf-lnwdt" is "Ready"
	I0605 18:33:53.735299   14557 pod_ready.go:86] duration metric: took 3.485793ms for pod "coredns-674b8bbfcf-lnwdt" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.736806   14557 pod_ready.go:83] waiting for pod "etcd-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.739735   14557 pod_ready.go:94] pod "etcd-addons-191833" is "Ready"
	I0605 18:33:53.739751   14557 pod_ready.go:86] duration metric: took 2.928396ms for pod "etcd-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.741248   14557 pod_ready.go:83] waiting for pod "kube-apiserver-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.744375   14557 pod_ready.go:94] pod "kube-apiserver-addons-191833" is "Ready"
	I0605 18:33:53.744391   14557 pod_ready.go:86] duration metric: took 3.124863ms for pod "kube-apiserver-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:53.745693   14557 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:54.131738   14557 pod_ready.go:94] pod "kube-controller-manager-addons-191833" is "Ready"
	I0605 18:33:54.131769   14557 pod_ready.go:86] duration metric: took 386.058263ms for pod "kube-controller-manager-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:54.332342   14557 pod_ready.go:83] waiting for pod "kube-proxy-g2l9p" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:54.731759   14557 pod_ready.go:94] pod "kube-proxy-g2l9p" is "Ready"
	I0605 18:33:54.731790   14557 pod_ready.go:86] duration metric: took 399.417825ms for pod "kube-proxy-g2l9p" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:54.932582   14557 pod_ready.go:83] waiting for pod "kube-scheduler-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:55.331668   14557 pod_ready.go:94] pod "kube-scheduler-addons-191833" is "Ready"
	I0605 18:33:55.331696   14557 pod_ready.go:86] duration metric: took 399.089236ms for pod "kube-scheduler-addons-191833" in "kube-system" namespace to be "Ready" or be gone ...
	I0605 18:33:55.331707   14557 pod_ready.go:40] duration metric: took 1.603169221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0605 18:33:55.375314   14557 start.go:607] kubectl: 1.33.1, cluster: 1.33.1 (minor skew: 0)
	I0605 18:33:55.378329   14557 out.go:177] * Done! kubectl is now configured to use "addons-191833" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 05 18:33:05 addons-191833 dockerd[1454]: time="2025-06-05T18:33:05.941326379Z" level=info msg="ignoring event" container=461431ec0bdc6048c32e9df04be94a36ba085a68481fbf84ae2a46220c9082f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 05 18:33:06 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:06Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5: Status: Downloaded newer image for registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5"
	Jun 05 18:33:06 addons-191833 dockerd[1454]: time="2025-06-05T18:33:06.397219494Z" level=info msg="ignoring event" container=bc983287805bac911709fa5c353bfef7d6eeccff1e28d06acf69d85b26a7f7d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 05 18:33:06 addons-191833 dockerd[1454]: time="2025-06-05T18:33:06.631877710Z" level=warning msg="reference for unknown type: " digest="sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0" remote="registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Jun 05 18:33:07 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:07Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/livenessprobe:v2.8.0@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0: Status: Downloaded newer image for registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0"
	Jun 05 18:33:07 addons-191833 dockerd[1454]: time="2025-06-05T18:33:07.660870382Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Jun 05 18:33:08 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:08Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Jun 05 18:33:09 addons-191833 dockerd[1454]: time="2025-06-05T18:33:09.187827799Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Jun 05 18:33:10 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:10Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Jun 05 18:33:11 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4193be59f2ac8bab56370ff3ce7812f55475700be2edc0f5a100465a7c33afe9/resolv.conf as [nameserver 10.96.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Jun 05 18:33:11 addons-191833 dockerd[1454]: time="2025-06-05T18:33:11.351560012Z" level=warning msg="reference for unknown type: " digest="sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9" remote="registry.k8s.io/ingress-nginx/controller@sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9"
	Jun 05 18:33:13 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e0d3aa4f17841cc6963cd6206f91394fc5a35815a4588522b2a4c0f973229920/resolv.conf as [nameserver 10.96.0.10 search volcano-system.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Jun 05 18:33:15 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:15Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.12.2@sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9"
	Jun 05 18:33:16 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:16Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.12.1@sha256:f8b50088a7329220cbdcc624067943a76a005bb18bda77647e618aab26cf759d: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f8b50088a7329220cbdcc624067943a76a005bb18bda77647e618aab26cf759d"
	Jun 05 18:33:17 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5104f4d08bad8b7e2031a01165f167e8c0e816772dab441ff1e89201b439c9e/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Jun 05 18:33:17 addons-191833 dockerd[1454]: time="2025-06-05T18:33:17.669284450Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Jun 05 18:33:19 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Jun 05 18:33:21 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:21Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469"
	Jun 05 18:33:23 addons-191833 dockerd[1454]: time="2025-06-05T18:33:23.564603710Z" level=info msg="ignoring event" container=9410bd2779ba7a298844c2802f1fef07bcba189b078bf30d26f222b9cd03687b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 05 18:33:43 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:33:43Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469"
	Jun 05 18:33:45 addons-191833 dockerd[1454]: time="2025-06-05T18:33:45.385292033Z" level=info msg="ignoring event" container=e75d61d49553c99004083251023306c9b7187a8308fed9154ec2749d29ccd1b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 05 18:34:29 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:34:29Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469"
	Jun 05 18:34:31 addons-191833 dockerd[1454]: time="2025-06-05T18:34:31.393401748Z" level=info msg="ignoring event" container=6c8d736db9169c7812b09f34305403a7628354433fcb6094a5dd0e1d66ed9f79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 05 18:35:51 addons-191833 cri-dockerd[1760]: time="2025-06-05T18:35:51Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469"
	Jun 05 18:35:53 addons-191833 dockerd[1454]: time="2025-06-05T18:35:53.397225914Z" level=info msg="ignoring event" container=96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
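
[Editor's note] The pull lines above reference images in tag@digest form; when both are present, the digest pins the exact image and the tag is informational only. In a pod spec this looks roughly like the sketch below (container name assumed, image reference copied from the log):

    containers:
    - name: csi-snapshotter   # assumed container name, for illustration
      image: registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f
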
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	96aecb94ea44e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469                            About a minute ago   Exited              gadget                                   5                   e443c0c2b7a60       gadget-dr2vr
	4585fec300a6d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 3 minutes ago        Running             gcp-auth                                 0                   d5104f4d08bad       gcp-auth-cd9db85c-rgf4b
	58bb802efe690       volcanosh/vc-webhook-manager@sha256:f8b50088a7329220cbdcc624067943a76a005bb18bda77647e618aab26cf759d                                         3 minutes ago        Running             admission                                0                   e0d3aa4f17841       volcano-admission-55859c8887-rd6kw
	bedd414fd55af       registry.k8s.io/ingress-nginx/controller@sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9                             3 minutes ago        Running             controller                               0                   4193be59f2ac8       ingress-nginx-controller-67c5cb88f-bcf65
	346780f3e2215       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago        Running             csi-snapshotter                          0                   22cb52cad0248       csi-hostpathplugin-scz49
	dce27d29a06c5       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago        Running             csi-provisioner                          0                   22cb52cad0248       csi-hostpathplugin-scz49
	31a552f1f3329       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago        Running             liveness-probe                           0                   22cb52cad0248       csi-hostpathplugin-scz49
	2f744e6f3e7c4       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago        Running             hostpath                                 0                   22cb52cad0248       csi-hostpathplugin-scz49
	1ca19197de7c7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago        Running             node-driver-registrar                    0                   22cb52cad0248       csi-hostpathplugin-scz49
	76240e22b80dd       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago        Running             csi-resizer                              0                   dda78bda38145       csi-hostpath-resizer-0
	897fd03fe1ade       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   4 minutes ago        Running             csi-external-health-monitor-controller   0                   22cb52cad0248       csi-hostpathplugin-scz49
	8b03bc8c25067       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             4 minutes ago        Running             csi-attacher                             0                   4c9509cf2b2e6       csi-hostpath-attacher-0
	58e1734598697       volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2                                               4 minutes ago        Running             volcano-scheduler                        0                   8dc63280426d5       volcano-scheduler-854568c9bb-vpbr4
	3c6bf99f0bc8c       volcanosh/vc-controller-manager@sha256:3815883c32f62c3a60b8208ba834f304d91d8f05cddfabd440aa15f7f8bef296                                      4 minutes ago        Running             volcano-controllers                      0                   da0c347b2227c       volcano-controllers-7b774bbd55-9ssw4
	7f30796108e7f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago        Running             volume-snapshot-controller               0                   f701587098491       snapshot-controller-68b874b76f-24xzj
	338f74c4e9a6a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      4 minutes ago        Running             volume-snapshot-controller               0                   12adb6f97a1dc       snapshot-controller-68b874b76f-8khc5
	42f532dc4bafa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2cf4ebfa82a37c357455458f6dfc334aea1392d508270b2517795a9933a02524                   4 minutes ago        Exited              patch                                    0                   3491f86c3522d       ingress-nginx-admission-patch-frjxm
	9be59bb27924d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2cf4ebfa82a37c357455458f6dfc334aea1392d508270b2517795a9933a02524                   4 minutes ago        Exited              create                                   0                   5bfd1259dfe60       ingress-nginx-admission-create-g2zqd
	12ced5ef4d0b4       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        4 minutes ago        Running             metrics-server                           0                   c0f199b137ff6       metrics-server-7fbb699795-5m64w
	e533262fd9a69       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago        Running             yakd                                     0                   02764165aa3b9       yakd-dashboard-575dd5996b-b5wsb
	cf0058369fc12       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              4 minutes ago        Running             registry-proxy                           0                   509dbf8b960b4       registry-proxy-mq24t
	e409996b95c5d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       4 minutes ago        Running             local-path-provisioner                   0                   2598d5efd2ec6       local-path-provisioner-76f89f99b5-scszn
	1da4b554966e0       registry@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7                                                             4 minutes ago        Running             registry                                 0                   7d7c51cd2a901       registry-694bd45846-k78hm
	2c5b1d406d06f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             4 minutes ago        Running             minikube-ingress-dns                     0                   9287fcdde27bf       kube-ingress-dns-minikube
	4cb2924c46fe5       gcr.io/cloud-spanner-emulator/emulator@sha256:f98725ceb484500d858d17916ea4a04e2a83184b5a080a87113770e82c177744                               4 minutes ago        Running             cloud-spanner-emulator                   0                   cbfa9ad98ad02       cloud-spanner-emulator-694f8b9456-4nxg6
	6368152b35873       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               4 minutes ago        Running             amd-gpu-device-plugin                    0                   5ed2893d78cf3       amd-gpu-device-plugin-kg7z2
	959b8c26ec368       nvcr.io/nvidia/k8s-device-plugin@sha256:037160a36de0f060fc21cc0cb2f795d980282ff1471b55530433ca4350b24c4f                                     4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   b750e8d22c90c       nvidia-device-plugin-daemonset-6sz4x
	5498ab0fdb9d7       6e38f40d628db                                                                                                                                5 minutes ago        Running             storage-provisioner                      0                   732264aad4e41       storage-provisioner
	cfa642997eb12       1cf5f116067c6                                                                                                                                5 minutes ago        Running             coredns                                  0                   c6398fa4a4081       coredns-674b8bbfcf-lnwdt
	a95bb1b624185       b79c189b052cd                                                                                                                                5 minutes ago        Running             kube-proxy                               0                   152c502b8a1f2       kube-proxy-g2l9p
	1c9a9ec763feb       398c985c0d950                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   61e30c545b9f0       kube-scheduler-addons-191833
	5ab679ff60ded       499038711c081                                                                                                                                5 minutes ago        Running             etcd                                     0                   2dede7c878e6a       etcd-addons-191833
	6836584e4e247       ef43894fa110c                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   bf2b21de644e6       kube-controller-manager-addons-191833
	668f6f5e53464       c6ab243b29f82                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   d14061ad9fd2c       kube-apiserver-addons-191833
	
	
	==> controller_ingress [bedd414fd55a] <==
	  Build:         7995f327cd0c228bda326a9e287ba559799bffe0
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0605 18:33:16.247630       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0605 18:33:16.247836       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0605 18:33:16.254462       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="33" git="v1.33.1" state="clean" commit="8adc0f041b8e7ad1d30e29cc59c6ae7a15e19828" platform="linux/amd64"
	I0605 18:33:16.656614       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0605 18:33:16.665474       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0605 18:33:16.672480       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0605 18:33:16.676367       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9e72b101-c6b0-4e06-a28e-ef37ea785883", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0605 18:33:16.679620       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d8a9f7c5-0383-4709-9198-25b0dc2d20b2", APIVersion:"v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0605 18:33:16.679693       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"46f60473-b34e-4d8a-b336-d9502112d539", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0605 18:33:17.873878       7 nginx.go:317] "Starting NGINX process"
	I0605 18:33:17.873923       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0605 18:33:17.874207       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0605 18:33:17.874461       7 controller.go:196] "Configuration changes detected, backend reload required"
	I0605 18:33:17.880244       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0605 18:33:17.880344       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-67c5cb88f-bcf65"
	I0605 18:33:17.882861       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-67c5cb88f-bcf65" node="addons-191833"
	I0605 18:33:17.902863       7 controller.go:216] "Backend successfully reloaded"
	I0605 18:33:17.902936       7 controller.go:227] "Initial sync, sleeping for 1 second"
	I0605 18:33:17.902978       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-67c5cb88f-bcf65", UID:"885788ea-8e91-4d5b-8ecd-53895db0539f", APIVersion:"v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [cfa642997eb1] <==
	[INFO] 10.244.0.8:55296 - 52335 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003120547s
	[INFO] 10.244.0.8:42080 - 29904 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004957167s
	[INFO] 10.244.0.8:42080 - 29499 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005153736s
	[INFO] 10.244.0.8:36036 - 16221 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005262668s
	[INFO] 10.244.0.8:36036 - 15810 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005707591s
	[INFO] 10.244.0.8:58773 - 25488 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005097176s
	[INFO] 10.244.0.8:58773 - 25145 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005107856s
	[INFO] 10.244.0.8:40477 - 33480 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144818s
	[INFO] 10.244.0.8:40477 - 33223 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141423s
	[INFO] 10.244.0.27:48709 - 49647 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000295357s
	[INFO] 10.244.0.27:60199 - 33147 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000384919s
	[INFO] 10.244.0.27:50687 - 2331 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124394s
	[INFO] 10.244.0.27:49290 - 1091 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000176424s
	[INFO] 10.244.0.27:41488 - 42273 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012267s
	[INFO] 10.244.0.27:46098 - 8373 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170941s
	[INFO] 10.244.0.27:52818 - 25101 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003286258s
	[INFO] 10.244.0.27:41276 - 63840 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00336291s
	[INFO] 10.244.0.27:40521 - 61448 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005307561s
	[INFO] 10.244.0.27:58455 - 17717 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005728973s
	[INFO] 10.244.0.27:50030 - 60352 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004779592s
	[INFO] 10.244.0.27:36159 - 8494 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004816923s
	[INFO] 10.244.0.27:52716 - 24963 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004569708s
	[INFO] 10.244.0.27:46757 - 48054 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005602804s
	[INFO] 10.244.0.27:46255 - 37062 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000768757s
	[INFO] 10.244.0.27:34577 - 36570 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001011066s
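
[Editor's note] The NXDOMAIN bursts above are the expected search-path expansion for external names: with the `ndots:5` option visible in the resolv.conf rewrites in the Docker log, a name like storage.googleapis.com is tried against every search suffix before being resolved as an absolute name. A pod that mostly resolves external hosts can shorten this walk with a dnsConfig override, sketched below (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: external-dns-client   # hypothetical name, for illustration
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"              # treat any name containing a dot as absolute, skipping the suffix walk
      containers:
      - name: app
        image: busybox:1.36       # placeholder image
        command: ["sleep", "3600"]
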
	
	
	==> describe nodes <==
	Name:               addons-191833
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191833
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=01246adb8f85a16e6fd2bbeecb0ebb43de6563df
	                    minikube.k8s.io/name=addons-191833
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_05T18_31_52_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191833
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191833"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Jun 2025 18:31:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191833
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Jun 2025 18:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Jun 2025 18:33:22 +0000   Thu, 05 Jun 2025 18:31:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Jun 2025 18:33:22 +0000   Thu, 05 Jun 2025 18:31:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Jun 2025 18:33:22 +0000   Thu, 05 Jun 2025 18:31:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Jun 2025 18:33:22 +0000   Thu, 05 Jun 2025 18:31:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191833
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bdbd6aee04f4d81a2250a84015e7d67
	  System UUID:                97c9b11a-e410-4b94-a368-1c978f99c4c2
	  Boot ID:                    06a8d87c-b728-4070-a8c6-b6c22bb6e8e6
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.1.1
	  Kubelet Version:            v1.33.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-694f8b9456-4nxg6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  gadget                      gadget-dr2vr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  gcp-auth                    gcp-auth-cd9db85c-rgf4b                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  ingress-nginx               ingress-nginx-controller-67c5cb88f-bcf65    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m5s
	  kube-system                 amd-gpu-device-plugin-kg7z2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 coredns-674b8bbfcf-lnwdt                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m15s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 csi-hostpathplugin-scz49                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 etcd-addons-191833                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m22s
	  kube-system                 kube-apiserver-addons-191833                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-addons-191833       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-g2l9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-addons-191833                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 metrics-server-7fbb699795-5m64w             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m7s
	  kube-system                 nvidia-device-plugin-daemonset-6sz4x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 registry-694bd45846-k78hm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 registry-creds-6b69cdcdd5-glqcp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 registry-proxy-mq24t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 snapshot-controller-68b874b76f-24xzj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 snapshot-controller-68b874b76f-8khc5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  local-path-storage          local-path-provisioner-76f89f99b5-scszn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  volcano-system              volcano-admission-55859c8887-rd6kw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  volcano-system              volcano-controllers-7b774bbd55-9ssw4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  volcano-system              volcano-scheduler-854568c9bb-vpbr4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  yakd-dashboard              yakd-dashboard-575dd5996b-b5wsb             0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  0 (0%)
	  memory             588Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m11s                  kube-proxy       
	  Warning  CgroupV1                 5m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node addons-191833 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node addons-191833 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node addons-191833 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m21s                  kubelet          Node addons-191833 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m21s                  kubelet          Node addons-191833 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m21s                  kubelet          Node addons-191833 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m17s                  node-controller  Node addons-191833 event: Registered Node addons-191833 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 cb 72 dc ef 92 08 06
	[  +0.496886] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1f 22 d2 ae cb 08 06
	[  +1.271604] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 14 85 51 d2 e5 08 06
	[Jun 5 18:33] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 44 cd 2f e3 47 08 06
	[  +2.949065] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 e4 49 b2 b1 16 08 06
	[  +0.100982] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 17 b9 a8 08 88 08 06
	[  +3.793951] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 09 31 9f b6 9a 08 06
	[  +0.312194] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 d0 5e a2 47 51 08 06
	[  +0.083696] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 2e 6f 07 5f 1d 08 06
	[  +8.618398] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 d1 d9 7a 0e 90 08 06
	[  +0.102282] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 76 f9 b9 11 c5 08 06
	[  +3.583540] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e fd 05 8b ff a1 08 06
	[  +0.000433] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 7e 45 ed 2d 99 3c 08 06
	
	
	==> etcd [5ab679ff60de] <==
	{"level":"info","ts":"2025-06-05T18:31:46.834136Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-05T18:31:46.834130Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-06-05T18:31:46.834185Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-06-05T18:31:47.323815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-06-05T18:31:47.323862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-06-05T18:31:47.323881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-06-05T18:31:47.323911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-06-05T18:31:47.323990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-06-05T18:31:47.324008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-06-05T18:31:47.324019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-06-05T18:31:47.324908Z","caller":"etcdserver/server.go:2697","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-05T18:31:47.325397Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-05T18:31:47.325399Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-191833 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-05T18:31:47.325425Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-05T18:31:47.325616Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-05T18:31:47.325698Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-05T18:31:47.325724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-05T18:31:47.325749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-05T18:31:47.325781Z","caller":"etcdserver/server.go:2721","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-05T18:31:47.326369Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-05T18:31:47.326504Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-05T18:31:47.327104Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-05T18:31:47.327179Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2025-06-05T18:32:24.677991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.782002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/volcano-system/volcano-admission-init\" limit:1 ","response":"range_response_count:1 size:3662"}
	{"level":"info","ts":"2025-06-05T18:32:24.678091Z","caller":"traceutil/trace.go:171","msg":"trace[1142059219] range","detail":"{range_begin:/registry/jobs/volcano-system/volcano-admission-init; range_end:; response_count:1; response_revision:1067; }","duration":"121.917604ms","start":"2025-06-05T18:32:24.556159Z","end":"2025-06-05T18:32:24.678076Z","steps":["trace[1142059219] 'range keys from in-memory index tree'  (duration: 121.697864ms)"],"step_count":1}
	
	
	==> gcp-auth [4585fec300a6] <==
	2025/06/05 18:33:19 GCP Auth Webhook started!
	2025/06/05 18:34:10 Ready to marshal response ...
	2025/06/05 18:34:10 Ready to write response ...
	
	
	==> kernel <==
	 18:37:12 up 19 min,  0 users,  load average: 0.31, 0.58, 0.32
	Linux addons-191833 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [668f6f5e5346] <==
	W0605 18:32:52.736769       1 handler_proxy.go:99] no RequestInfo found in the context
	E0605 18:32:52.736846       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0605 18:32:52.736915       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.163.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.163.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.163.93:443: connect: connection refused" logger="UnhandledError"
	E0605 18:32:52.738482       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.163.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.163.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.163.93:443: connect: connection refused" logger="UnhandledError"
	I0605 18:32:52.839468       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0605 18:33:00.394736       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:01.446360       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:02.450484       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:03.530257       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:04.532851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:05.540606       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:06.630386       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:07.725412       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:08.825855       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:09.925374       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:10.984966       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:12.025171       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:13.066595       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:14.105938       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:15.172575       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	W0605 18:33:16.225433       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.235.132:443: connect: connection refused
	I0605 18:34:10.649632       1 controller.go:667] quota admission added evaluator for: jobs.batch.volcano.sh
	
	
	==> kube-controller-manager [6836584e4e24] <==
	I0605 18:31:56.150307       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0605 18:31:56.233931       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0605 18:31:56.236200       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0605 18:31:56.284520       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0605 18:31:56.338669       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0605 18:31:56.340884       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0605 18:31:56.751064       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0605 18:31:56.783712       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0605 18:31:56.783731       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0605 18:31:56.783736       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0605 18:32:04.926511       1 replica_set.go:562] "Unhandled Error" err="sync \"kube-system/metrics-server-7fbb699795\" failed with pods \"metrics-server-7fbb699795-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0605 18:32:04.936040       1 replica_set.go:562] "Unhandled Error" err="sync \"kube-system/metrics-server-7fbb699795\" failed with pods \"metrics-server-7fbb699795-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0605 18:32:26.344934       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0605 18:32:26.346370       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I0605 18:32:26.346409       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0605 18:32:26.346446       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I0605 18:32:26.346479       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="traces.gadget.kinvolk.io"
	I0605 18:32:26.346519       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I0605 18:32:26.346540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I0605 18:32:26.346574       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I0605 18:32:26.346635       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0605 18:32:26.447476       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0605 18:32:26.758865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0605 18:32:26.762493       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0605 18:32:26.863202       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a95bb1b62418] <==
	I0605 18:31:59.640771       1 server_linux.go:63] "Using iptables proxy"
	I0605 18:32:00.141215       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0605 18:32:00.141293       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0605 18:32:00.634312       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0605 18:32:00.634376       1 server_linux.go:145] "Using iptables Proxier"
	I0605 18:32:00.641784       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0605 18:32:00.723685       1 server.go:516] "Version info" version="v1.33.1"
	I0605 18:32:00.723714       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:32:00.725946       1 config.go:199] "Starting service config controller"
	I0605 18:32:00.725963       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0605 18:32:00.725993       1 config.go:105] "Starting endpoint slice config controller"
	I0605 18:32:00.725998       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0605 18:32:00.726012       1 config.go:440] "Starting serviceCIDR config controller"
	I0605 18:32:00.726017       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0605 18:32:00.726814       1 config.go:329] "Starting node config controller"
	I0605 18:32:00.726822       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0605 18:32:00.830643       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0605 18:32:00.830689       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0605 18:32:00.830727       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0605 18:32:00.830847       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1c9a9ec763fe] <==
	I0605 18:31:49.346851       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0605 18:31:49.346984       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0605 18:31:49.347305       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0605 18:31:49.425728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0605 18:31:49.425890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0605 18:31:49.426581       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0605 18:31:49.426040       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0605 18:31:49.426047       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0605 18:31:49.426100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0605 18:31:49.426129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0605 18:31:49.426233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0605 18:31:49.426336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0605 18:31:49.426340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0605 18:31:49.426441       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0605 18:31:49.425971       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0605 18:31:49.426688       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0605 18:31:49.426976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0605 18:31:49.427051       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0605 18:31:49.427178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0605 18:31:50.250234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0605 18:31:50.337515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0605 18:31:50.352085       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0605 18:31:50.384410       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0605 18:31:50.386268       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0605 18:31:50.947466       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 05 18:35:51 addons-191833 kubelet[2661]: I0605 18:35:51.632135    2661 scope.go:117] "RemoveContainer" containerID="6c8d736db9169c7812b09f34305403a7628354433fcb6094a5dd0e1d66ed9f79"
	Jun 05 18:35:51 addons-191833 kubelet[2661]: I0605 18:35:51.738096    2661 scope.go:117] "RemoveContainer" containerID="6c8d736db9169c7812b09f34305403a7628354433fcb6094a5dd0e1d66ed9f79"
	Jun 05 18:35:53 addons-191833 kubelet[2661]: I0605 18:35:53.915284    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:35:53 addons-191833 kubelet[2661]: E0605 18:35:53.915540    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:35:54 addons-191833 kubelet[2661]: I0605 18:35:54.925677    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:35:54 addons-191833 kubelet[2661]: E0605 18:35:54.925927    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:35:59 addons-191833 kubelet[2661]: I0605 18:35:59.641345    2661 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mq24t" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 18:36:06 addons-191833 kubelet[2661]: I0605 18:36:06.629977    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:36:06 addons-191833 kubelet[2661]: E0605 18:36:06.630183    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:36:11 addons-191833 kubelet[2661]: E0605 18:36:11.263050    2661 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jun 05 18:36:11 addons-191833 kubelet[2661]: E0605 18:36:11.263170    2661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e782afc-a232-44d7-801a-bbdd9acbe3bb-gcr-creds podName:5e782afc-a232-44d7-801a-bbdd9acbe3bb nodeName:}" failed. No retries permitted until 2025-06-05 18:38:13.263136437 +0000 UTC m=+381.845279342 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5e782afc-a232-44d7-801a-bbdd9acbe3bb-gcr-creds") pod "registry-creds-6b69cdcdd5-glqcp" (UID: "5e782afc-a232-44d7-801a-bbdd9acbe3bb") : secret "registry-creds-gcr" not found
	Jun 05 18:36:11 addons-191833 kubelet[2661]: I0605 18:36:11.631278    2661 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kg7z2" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 18:36:12 addons-191833 kubelet[2661]: I0605 18:36:12.630453    2661 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-k78hm" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 18:36:17 addons-191833 kubelet[2661]: I0605 18:36:17.630293    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:36:17 addons-191833 kubelet[2661]: E0605 18:36:17.630595    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:36:19 addons-191833 kubelet[2661]: E0605 18:36:19.631120    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-6b69cdcdd5-glqcp" podUID="5e782afc-a232-44d7-801a-bbdd9acbe3bb"
	Jun 05 18:36:26 addons-191833 kubelet[2661]: I0605 18:36:26.630143    2661 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6sz4x" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 18:36:31 addons-191833 kubelet[2661]: I0605 18:36:31.633123    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:36:31 addons-191833 kubelet[2661]: E0605 18:36:31.634016    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:36:45 addons-191833 kubelet[2661]: I0605 18:36:45.630453    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:36:45 addons-191833 kubelet[2661]: E0605 18:36:45.630732    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:36:59 addons-191833 kubelet[2661]: I0605 18:36:59.630050    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:36:59 addons-191833 kubelet[2661]: E0605 18:36:59.630284    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	Jun 05 18:37:10 addons-191833 kubelet[2661]: I0605 18:37:10.630683    2661 scope.go:117] "RemoveContainer" containerID="96aecb94ea44e8dfc89f2e83c540f2c65fd42f90e36d4d38edda3025b575330d"
	Jun 05 18:37:10 addons-191833 kubelet[2661]: E0605 18:37:10.630873    2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-dr2vr_gadget(21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6)\"" pod="gadget/gadget-dr2vr" podUID="21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6"
	
	
	==> storage-provisioner [5498ab0fdb9d] <==
	W0605 18:36:46.942158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:48.944708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:48.948708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:50.951521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:50.955384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:52.958179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:52.961673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:54.964175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:54.967803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:56.970046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:56.973642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:58.976231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:36:58.979788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:00.982295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:00.986690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:02.989518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:02.993303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:04.996229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:05.000002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:07.003058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:07.007376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:09.009853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:09.014776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:11.018219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0605 18:37:11.021906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
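The kube-apiserver log in the dump above is the most informative part of this capture: between 18:33:00 and 18:33:16 every call to the mutatequeue.volcano.sh webhook failed closed with "connection refused" against volcano-admission-service.volcano-system.svc, and those retries stop roughly a minute before the vcjob was admitted at 18:34:10 (the jobs.batch.volcano.sh quota-evaluator line). That pattern is consistent with delayed Volcano queue setup rather than a hard admission failure, so the reason the test-job pod never started within 3m0s is not directly visible here. If the addons-191833 context were still reachable, checks like the following would narrow it down; these are illustrative triage commands, not part of the test:

    kubectl --context addons-191833 -n volcano-system get endpoints volcano-admission-service
    kubectl --context addons-191833 get queues.scheduling.volcano.sh
    kubectl --context addons-191833 -n my-volcano describe vcjob test-job
    kubectl --context addons-191833 -n my-volcano get podgroups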
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191833 -n addons-191833
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191833 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-g2zqd ingress-nginx-admission-patch-frjxm registry-creds-6b69cdcdd5-glqcp
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-191833 describe pod ingress-nginx-admission-create-g2zqd ingress-nginx-admission-patch-frjxm registry-creds-6b69cdcdd5-glqcp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-191833 describe pod ingress-nginx-admission-create-g2zqd ingress-nginx-admission-patch-frjxm registry-creds-6b69cdcdd5-glqcp: exit status 1 (60.891142ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g2zqd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-frjxm" not found
	Error from server (NotFound): pods "registry-creds-6b69cdcdd5-glqcp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-191833 describe pod ingress-nginx-admission-create-g2zqd ingress-nginx-admission-patch-frjxm registry-creds-6b69cdcdd5-glqcp: exit status 1
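Note that registry-creds-6b69cdcdd5-glqcp, one of the non-running pods listed above, is unrelated to Volcano: the kubelet log shows MountVolume.SetUp failing for volume "gcr-creds" because the secret "registry-creds-gcr" does not exist, with retries backed off past two minutes. For local debugging only, an empty placeholder secret is enough to let the volume mount (the addon would still need real credentials to do anything useful); this is an illustrative command, not something the test runs:

    kubectl --context addons-191833 -n kube-system create secret generic registry-creds-gcr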
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable volcano --alsologtostderr -v=1
--- FAIL: TestAddons/serial/Volcano (197.81s)
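For reproducing the failing step by hand, a Volcano job of the kind this test waits on (pods labelled volcano.sh/job-name=test-job in namespace my-volcano) looks roughly like the manifest below. This is a generic sketch against the batch.volcano.sh/v1alpha1 API, assuming the my-volcano namespace already exists; it is not the actual contents of testdata/vcjob.yaml:

    kubectl --context addons-191833 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      minAvailable: 1
      schedulerName: volcano          # hand the pods to the Volcano scheduler
      tasks:
        - replicas: 1
          name: nginx
          template:                   # ordinary PodTemplateSpec
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx:latest
    EOF

Such a pod only becomes schedulable once Volcano's queue and PodGroup machinery is up, which is why the webhook/queue timing noted above matters for this timeout.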


Test pass (324/347)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.14
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.33.1/json-events 4.13
13 TestDownloadOnly/v1.33.1/preload-exists 0
17 TestDownloadOnly/v1.33.1/LogsDuration 0.06
18 TestDownloadOnly/v1.33.1/DeleteAll 0.19
19 TestDownloadOnly/v1.33.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.01
21 TestBinaryMirror 0.77
22 TestOffline 80.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 163.85
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
35 TestAddons/parallel/Registry 14.71
36 TestAddons/parallel/RegistryCreds 0.53
37 TestAddons/parallel/Ingress 16.61
38 TestAddons/parallel/InspektorGadget 11.88
39 TestAddons/parallel/MetricsServer 5.55
41 TestAddons/parallel/CSI 63.68
42 TestAddons/parallel/Headlamp 17.38
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 54.61
45 TestAddons/parallel/NvidiaDevicePlugin 5.57
46 TestAddons/parallel/Yakd 11.6
47 TestAddons/parallel/AmdGpuDevicePlugin 6.41
48 TestAddons/StoppedEnableDisable 11.09
49 TestCertOptions 27.13
50 TestCertExpiration 232.84
51 TestDockerFlags 30.11
52 TestForceSystemdFlag 33.54
53 TestForceSystemdEnv 29.75
55 TestKVMDriverInstallOrUpdate 1.34
59 TestErrorSpam/setup 25.41
60 TestErrorSpam/start 0.56
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.14
63 TestErrorSpam/unpause 1.34
64 TestErrorSpam/stop 10.82
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 62.65
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.72
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.07
76 TestFunctional/serial/CacheCmd/cache/add_local 0.7
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.24
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 37.03
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 0.94
87 TestFunctional/serial/LogsFileCmd 0.94
88 TestFunctional/serial/InvalidService 4.17
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 8.34
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.88
98 TestFunctional/parallel/ServiceCmdConnect 17.66
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 35.96
102 TestFunctional/parallel/SSHCmd 0.59
103 TestFunctional/parallel/CpCmd 1.81
104 TestFunctional/parallel/MySQL 24.21
105 TestFunctional/parallel/FileSync 0.3
106 TestFunctional/parallel/CertSync 1.85
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.32
114 TestFunctional/parallel/License 0.19
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.57
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
124 TestFunctional/parallel/ImageCommands/Setup 0.44
125 TestFunctional/parallel/DockerEnv/bash 1.08
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.28
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
139 TestFunctional/parallel/ProfileCmd/profile_list 0.45
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/ServiceCmd/DeployApp 14.16
149 TestFunctional/parallel/ServiceCmd/List 0.92
150 TestFunctional/parallel/MountCmd/any-port 7.6
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.88
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
155 TestFunctional/parallel/MountCmd/specific-port 1.9
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.01
164 TestMultiControlPlane/serial/StartCluster 94.66
165 TestMultiControlPlane/serial/DeployApp 38.08
166 TestMultiControlPlane/serial/PingHostFromPods 1.03
167 TestMultiControlPlane/serial/AddWorkerNode 12.94
168 TestMultiControlPlane/serial/NodeLabels 0.08
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
170 TestMultiControlPlane/serial/CopyFile 15.75
171 TestMultiControlPlane/serial/StopSecondaryNode 11.42
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
173 TestMultiControlPlane/serial/RestartSecondaryNode 39.04
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 175.61
176 TestMultiControlPlane/serial/DeleteSecondaryNode 9.25
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
178 TestMultiControlPlane/serial/StopCluster 32.07
179 TestMultiControlPlane/serial/RestartCluster 64.66
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
181 TestMultiControlPlane/serial/AddSecondaryNode 29.15
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
185 TestImageBuild/serial/Setup 26.16
186 TestImageBuild/serial/NormalBuild 0.97
187 TestImageBuild/serial/BuildWithBuildArg 0.64
188 TestImageBuild/serial/BuildWithDockerIgnore 0.44
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.48
193 TestJSONOutput/start/Command 36.14
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.49
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.43
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 10.78
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
218 TestKicCustomNetwork/create_custom_network 28.35
219 TestKicCustomNetwork/use_default_bridge_network 25.72
220 TestKicExistingNetwork 22.57
221 TestKicCustomSubnet 24.87
222 TestKicStaticIP 24.79
223 TestMainNoArgs 0.04
224 TestMinikubeProfile 54.34
227 TestMountStart/serial/StartWithMountFirst 6.63
228 TestMountStart/serial/VerifyMountFirst 0.24
229 TestMountStart/serial/StartWithMountSecond 6.66
230 TestMountStart/serial/VerifyMountSecond 0.23
231 TestMountStart/serial/DeleteFirst 1.45
232 TestMountStart/serial/VerifyMountPostDelete 0.24
233 TestMountStart/serial/Stop 1.17
234 TestMountStart/serial/RestartStopped 7.61
235 TestMountStart/serial/VerifyMountPostStop 0.24
238 TestMultiNode/serial/FreshStart2Nodes 45.64
239 TestMultiNode/serial/DeployApp2Nodes 36.56
240 TestMultiNode/serial/PingHostFrom2Pods 0.72
241 TestMultiNode/serial/AddNode 12.81
242 TestMultiNode/serial/MultiNodeLabels 0.06
243 TestMultiNode/serial/ProfileList 0.65
244 TestMultiNode/serial/CopyFile 8.99
245 TestMultiNode/serial/StopNode 2.08
246 TestMultiNode/serial/StartAfterStop 7.9
247 TestMultiNode/serial/RestartKeepsNodes 73.39
248 TestMultiNode/serial/DeleteNode 5.11
249 TestMultiNode/serial/StopMultiNode 21.46
250 TestMultiNode/serial/RestartMultiNode 54.05
251 TestMultiNode/serial/ValidateNameConflict 24.87
256 TestPreload 98.37
258 TestScheduledStopUnix 98.99
259 TestSkaffold 99.5
261 TestInsufficientStorage 9.93
262 TestRunningBinaryUpgrade 61.9
264 TestKubernetesUpgrade 338.61
265 TestMissingContainerUpgrade 132.77
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
268 TestNoKubernetes/serial/StartWithK8s 36.32
269 TestNoKubernetes/serial/StartWithStopK8s 16.19
270 TestNoKubernetes/serial/Start 9.21
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
272 TestNoKubernetes/serial/ProfileList 20.77
273 TestNoKubernetes/serial/Stop 2.98
274 TestNoKubernetes/serial/StartNoArgs 7.03
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
276 TestStoppedBinaryUpgrade/Setup 0.41
277 TestStoppedBinaryUpgrade/Upgrade 72.29
286 TestPause/serial/Start 73.32
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
299 TestPause/serial/SecondStartNoReconfiguration 34.31
300 TestPause/serial/Pause 0.61
301 TestPause/serial/VerifyStatus 0.35
302 TestPause/serial/Unpause 0.46
303 TestPause/serial/PauseAgain 0.61
304 TestPause/serial/DeletePaused 2.12
305 TestPause/serial/VerifyDeletedResources 15.42
307 TestStartStop/group/old-k8s-version/serial/FirstStart 102.77
309 TestStartStop/group/no-preload/serial/FirstStart 79.7
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.3
312 TestStartStop/group/no-preload/serial/DeployApp 10.33
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
314 TestStartStop/group/no-preload/serial/Stop 10.72
315 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/no-preload/serial/SecondStart 51.41
318 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
319 TestStartStop/group/old-k8s-version/serial/Stop 11.03
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/old-k8s-version/serial/SecondStart 101.48
323 TestStartStop/group/newest-cni/serial/FirstStart 33.39
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.02
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
330 TestStartStop/group/no-preload/serial/Pause 2.57
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.76
334 TestStartStop/group/embed-certs/serial/FirstStart 66.76
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
337 TestStartStop/group/newest-cni/serial/Stop 11.1
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
339 TestStartStop/group/newest-cni/serial/SecondStart 15.37
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
343 TestStartStop/group/newest-cni/serial/Pause 2.49
344 TestNetworkPlugins/group/auto/Start 62.94
345 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
349 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
350 TestStartStop/group/old-k8s-version/serial/Pause 2.54
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
353 TestStartStop/group/embed-certs/serial/DeployApp 10.33
354 TestNetworkPlugins/group/kindnet/Start 58.99
355 TestNetworkPlugins/group/calico/Start 58.28
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
357 TestStartStop/group/embed-certs/serial/Stop 10.7
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
359 TestStartStop/group/embed-certs/serial/SecondStart 51.23
360 TestNetworkPlugins/group/auto/KubeletFlags 0.41
361 TestNetworkPlugins/group/auto/NetCatPod 10.28
362 TestNetworkPlugins/group/auto/DNS 0.16
363 TestNetworkPlugins/group/auto/Localhost 0.15
364 TestNetworkPlugins/group/auto/HairPin 0.12
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
368 TestNetworkPlugins/group/kindnet/NetCatPod 11.2
369 TestNetworkPlugins/group/custom-flannel/Start 51.38
370 TestNetworkPlugins/group/calico/KubeletFlags 0.3
371 TestNetworkPlugins/group/calico/NetCatPod 12.28
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
374 TestNetworkPlugins/group/kindnet/DNS 0.13
375 TestNetworkPlugins/group/kindnet/Localhost 0.12
376 TestNetworkPlugins/group/kindnet/HairPin 0.14
377 TestNetworkPlugins/group/calico/DNS 0.14
378 TestNetworkPlugins/group/calico/Localhost 0.12
379 TestNetworkPlugins/group/calico/HairPin 0.12
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
381 TestStartStop/group/embed-certs/serial/Pause 2.68
382 TestNetworkPlugins/group/false/Start 71.99
383 TestNetworkPlugins/group/enable-default-cni/Start 70.56
384 TestNetworkPlugins/group/flannel/Start 47.46
385 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
386 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
387 TestNetworkPlugins/group/custom-flannel/DNS 0.17
388 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
389 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
390 TestNetworkPlugins/group/bridge/Start 67.74
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
393 TestNetworkPlugins/group/flannel/NetCatPod 10.18
394 TestNetworkPlugins/group/false/KubeletFlags 0.28
395 TestNetworkPlugins/group/false/NetCatPod 10.2
396 TestNetworkPlugins/group/flannel/DNS 0.15
397 TestNetworkPlugins/group/flannel/Localhost 0.14
398 TestNetworkPlugins/group/flannel/HairPin 0.12
399 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
400 TestNetworkPlugins/group/false/DNS 0.15
401 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
402 TestNetworkPlugins/group/false/Localhost 0.18
403 TestNetworkPlugins/group/false/HairPin 0.14
404 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
405 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
406 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
407 TestNetworkPlugins/group/kubenet/Start 67.43
408 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
409 TestNetworkPlugins/group/bridge/NetCatPod 9.19
410 TestNetworkPlugins/group/bridge/DNS 0.13
411 TestNetworkPlugins/group/bridge/Localhost 0.11
412 TestNetworkPlugins/group/bridge/HairPin 0.1
413 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
414 TestNetworkPlugins/group/kubenet/NetCatPod 9.18
415 TestNetworkPlugins/group/kubenet/DNS 0.19
416 TestNetworkPlugins/group/kubenet/Localhost 0.1
417 TestNetworkPlugins/group/kubenet/HairPin 0.1

TestDownloadOnly/v1.20.0/json-events (5.14s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-810688 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-810688 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.140622484s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.14s)
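
For reference, the invocation above can be replayed by hand; a minimal sketch, assuming out/minikube-linux-amd64 has already been built (the profile name is arbitrary):

	# Fetch the v1.20.0 preload and binaries without starting a cluster.
	# -o=json emits machine-readable progress events; --force skips host validations.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-810688 \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --driver=docker --container-runtime=docker
	# Remove the profile when done.
	out/minikube-linux-amd64 delete -p download-only-810688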

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0605 18:31:04.569867   13279 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0605 18:31:04.569939   13279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-810688
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-810688: exit status 85 (58.866613ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-810688 | jenkins | v1.36.0 | 05 Jun 25 18:30 UTC |          |
	|         | -p download-only-810688        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/05 18:30:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 18:30:59.468342   13291 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:30:59.468622   13291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:30:59.468632   13291 out.go:358] Setting ErrFile to fd 2...
	I0605 18:30:59.468636   13291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:30:59.468805   13291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	W0605 18:30:59.468936   13291 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20889-6302/.minikube/config/config.json: open /home/jenkins/minikube-integration/20889-6302/.minikube/config/config.json: no such file or directory
	I0605 18:30:59.469488   13291 out.go:352] Setting JSON to true
	I0605 18:30:59.470414   13291 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":806,"bootTime":1749147453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0605 18:30:59.470511   13291 start.go:140] virtualization: kvm guest
	I0605 18:30:59.472723   13291 out.go:97] [download-only-810688] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0605 18:30:59.472831   13291 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball: no such file or directory
	I0605 18:30:59.472881   13291 notify.go:220] Checking for updates...
	I0605 18:30:59.474194   13291 out.go:169] MINIKUBE_LOCATION=20889
	I0605 18:30:59.475527   13291 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:30:59.476773   13291 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:30:59.478214   13291 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	I0605 18:30:59.479460   13291 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0605 18:30:59.481578   13291 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0605 18:30:59.481846   13291 driver.go:404] Setting default libvirt URI to qemu:///system
	I0605 18:30:59.505024   13291 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0605 18:30:59.505094   13291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:30:59.881719   13291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-06-05 18:30:59.871480877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:30:59.881807   13291 docker.go:318] overlay module found
	I0605 18:30:59.883383   13291 out.go:97] Using the docker driver based on user configuration
	I0605 18:30:59.883404   13291 start.go:304] selected driver: docker
	I0605 18:30:59.883409   13291 start.go:908] validating driver "docker" against <nil>
	I0605 18:30:59.883515   13291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:30:59.931737   13291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-06-05 18:30:59.922860178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:30:59.931892   13291 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0605 18:30:59.932431   13291 start_flags.go:408] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0605 18:30:59.932638   13291 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0605 18:30:59.934526   13291 out.go:169] Using Docker driver with root privileges
	I0605 18:30:59.935727   13291 cni.go:84] Creating CNI manager for ""
	I0605 18:30:59.935801   13291 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0605 18:30:59.935869   13291 start.go:347] cluster config:
	{Name:download-only-810688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-810688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0605 18:30:59.937388   13291 out.go:97] Starting "download-only-810688" primary control-plane node in "download-only-810688" cluster
	I0605 18:30:59.937403   13291 cache.go:121] Beginning downloading kic base image for docker with docker
	I0605 18:30:59.938536   13291 out.go:97] Pulling base image v0.0.47 ...
	I0605 18:30:59.938565   13291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0605 18:30:59.938655   13291 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b in local docker daemon
	I0605 18:30:59.954392   13291 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b to local cache
	I0605 18:30:59.954556   13291 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b in local cache directory
	I0605 18:30:59.954677   13291 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b to local cache
	I0605 18:30:59.955349   13291 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0605 18:30:59.955374   13291 cache.go:56] Caching tarball of preloaded images
	I0605 18:30:59.955523   13291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0605 18:30:59.957160   13291 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0605 18:30:59.957174   13291 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0605 18:30:59.986864   13291 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0605 18:31:03.174405   13291 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0605 18:31:03.174491   13291 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0605 18:31:03.695278   13291 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b as a tarball
	I0605 18:31:04.023615   13291 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0605 18:31:04.023927   13291 profile.go:143] Saving config to /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/download-only-810688/config.json ...
	I0605 18:31:04.023956   13291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/download-only-810688/config.json: {Name:mk92a62ca5629b86d61d3544189029c34c8db102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:31:04.024121   13291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0605 18:31:04.024290   13291 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20889-6302/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-810688 host does not exist
	  To start a cluster, run: "minikube start -p download-only-810688"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-810688
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.33.1/json-events (4.13s)

=== RUN   TestDownloadOnly/v1.33.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-887660 --force --alsologtostderr --kubernetes-version=v1.33.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-887660 --force --alsologtostderr --kubernetes-version=v1.33.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.129903909s)
--- PASS: TestDownloadOnly/v1.33.1/json-events (4.13s)

                                                
                                    
TestDownloadOnly/v1.33.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.33.1/preload-exists
I0605 18:31:09.086192   13279 preload.go:131] Checking if preload exists for k8s version v1.33.1 and runtime docker
I0605 18:31:09.086236   13279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20889-6302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.33.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-887660
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-887660: exit status 85 (55.892255ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-810688 | jenkins | v1.36.0 | 05 Jun 25 18:30 UTC |                     |
	|         | -p download-only-810688        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| delete  | -p download-only-810688        | download-only-810688 | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC | 05 Jun 25 18:31 UTC |
	| start   | -o=json --download-only        | download-only-887660 | jenkins | v1.36.0 | 05 Jun 25 18:31 UTC |                     |
	|         | -p download-only-887660        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/05 18:31:04
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 18:31:04.994522   13632 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:31:04.994609   13632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:31:04.994620   13632 out.go:358] Setting ErrFile to fd 2...
	I0605 18:31:04.994624   13632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:31:04.994799   13632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:31:04.995333   13632 out.go:352] Setting JSON to true
	I0605 18:31:04.996127   13632 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":812,"bootTime":1749147453,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0605 18:31:04.996223   13632 start.go:140] virtualization: kvm guest
	I0605 18:31:04.998225   13632 out.go:97] [download-only-887660] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0605 18:31:04.998371   13632 notify.go:220] Checking for updates...
	I0605 18:31:04.999571   13632 out.go:169] MINIKUBE_LOCATION=20889
	I0605 18:31:05.000775   13632 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:31:05.001862   13632 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:31:05.003000   13632 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	I0605 18:31:05.004140   13632 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0605 18:31:05.006062   13632 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0605 18:31:05.006264   13632 driver.go:404] Setting default libvirt URI to qemu:///system
	I0605 18:31:05.027754   13632 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0605 18:31:05.027820   13632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:31:05.077404   13632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-06-05 18:31:05.068504957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:31:05.077538   13632 docker.go:318] overlay module found
	I0605 18:31:05.079666   13632 out.go:97] Using the docker driver based on user configuration
	I0605 18:31:05.079706   13632 start.go:304] selected driver: docker
	I0605 18:31:05.079718   13632 start.go:908] validating driver "docker" against <nil>
	I0605 18:31:05.079805   13632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:31:05.124656   13632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-06-05 18:31:05.11618718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:31:05.124795   13632 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0605 18:31:05.125235   13632 start_flags.go:408] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0605 18:31:05.125394   13632 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0605 18:31:05.126890   13632 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-887660 host does not exist
	  To start a cluster, run: "minikube start -p download-only-887660"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.33.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.33.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.33.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.33.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-887660
--- PASS: TestDownloadOnly/v1.33.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (1.01s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-744836 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-744836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-744836
--- PASS: TestDownloadOnlyKic (1.01s)

                                                
                                    
TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
I0605 18:31:10.721260   13279 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-722276 --alsologtostderr --binary-mirror http://127.0.0.1:39967 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-722276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-722276
--- PASS: TestBinaryMirror (0.77s)
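
The flag under test redirects binary downloads away from dl.k8s.io; a minimal sketch of the same invocation, assuming a mirror serving the kubectl binary is reachable at the address the test happened to bind (the port is ephemeral):

	# Download kubectl via a local mirror rather than the upstream release bucket.
	out/minikube-linux-amd64 start --download-only -p binary-mirror-722276 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:39967 \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 delete -p binary-mirror-722276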

                                                
                                    
TestOffline (80.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-189968 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-189968 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m18.159103603s)
helpers_test.go:175: Cleaning up "offline-docker-189968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-189968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-189968: (2.110314323s)
--- PASS: TestOffline (80.27s)
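
The flags here are ordinary; the test's point is that the start succeeds from the local cache while the harness withholds network access. A minimal sketch of the same invocation:

	# Start with a bounded memory allocation and wait for all components to be ready.
	out/minikube-linux-amd64 start -p offline-docker-189968 --alsologtostderr -v=1 \
	  --memory=3072 --wait=true --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 delete -p offline-docker-189968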

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-191833
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-191833: exit status 85 (48.73343ms)

                                                
                                                
-- stdout --
	* Profile "addons-191833" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-191833"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-191833
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-191833: exit status 85 (49.021275ms)

                                                
                                                
-- stdout --
	* Profile "addons-191833" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-191833"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (163.85s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-191833 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-191833 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.854555403s)
--- PASS: TestAddons/Setup (163.85s)
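
Setup is a single start with one --addons flag per addon; a trimmed sketch (the full run above enables fifteen addons the same way):

	out/minikube-linux-amd64 start -p addons-191833 --wait=true --memory=4096 \
	  --alsologtostderr --driver=docker --container-runtime=docker \
	  --addons=registry --addons=metrics-server \
	  --addons=ingress --addons=ingress-dns --addons=volcano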

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-191833 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-191833 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-191833 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-191833 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c231c18d-0556-4616-b4a3-fe47887d8df4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c231c18d-0556-4616-b4a3-fe47887d8df4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00346234s
addons_test.go:694: (dbg) Run:  kubectl --context addons-191833 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-191833 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-191833 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
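
The assertion is simply that the gcp-auth addon injected credential variables into the pod's environment; the same check by hand:

	kubectl --context addons-191833 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-191833 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"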

                                                
                                    
TestAddons/parallel/Registry (14.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.859232ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-k78hm" [3fc306df-aca8-427f-9e3d-8f92e7b1cad6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00192377s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mq24t" [ad022fb2-e887-4f8e-a153-c311bd3ff71b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003468886s
addons_test.go:392: (dbg) Run:  kubectl --context addons-191833 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-191833 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-191833 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.034121142s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.71s)
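
The functional probe is a one-shot pod resolving the registry Service over cluster DNS; the same probe by hand:

	# wget --spider issues the request without downloading the body; -S prints response headers.
	kubectl --context addons-191833 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"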

                                                
                                    
TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 1.425718ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-191833
addons_test.go:332: (dbg) Run:  kubectl --context addons-191833 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.53s)

                                                
                                    
TestAddons/parallel/Ingress (16.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-191833 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-191833 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-191833 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ea4cd2c9-ca56-403e-9519-a217d3e9f24d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ea4cd2c9-ca56-403e-9519-a217d3e9f24d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003042006s
I0605 18:38:10.872694   13279 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-191833 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable ingress --alsologtostderr -v=1: (7.580899768s)
--- PASS: TestAddons/parallel/Ingress (16.61s)
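
Both routing paths checked above can be exercised by hand: HTTP routing via the Host header from inside the node, and ingress-dns resolution against the node IP (192.168.49.2 in this run):

	out/minikube-linux-amd64 -p addons-191833 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2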

                                                
                                    
TestAddons/parallel/InspektorGadget (11.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dr2vr" [21e9ad53-c034-4ab2-ba29-8ada1a3fe6f6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.064673949s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable inspektor-gadget --alsologtostderr -v=1: (5.815937469s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 1.853889ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5m64w" [96f6d999-ab98-4e15-81fd-bef3cfec2f61] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003206877s
addons_test.go:463: (dbg) Run:  kubectl --context addons-191833 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)
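
Once the metrics-server pod reports healthy, the addon is verified with a plain top query:

	kubectl --context addons-191833 top pods -n kube-system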

                                                
                                    
TestAddons/parallel/CSI (63.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.777249ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-191833 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/06/05 18:37:45 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-191833 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2e703e9e-9401-4116-95a5-bdbb40fc3432] Pending
helpers_test.go:344: "task-pv-pod" [2e703e9e-9401-4116-95a5-bdbb40fc3432] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2e703e9e-9401-4116-95a5-bdbb40fc3432] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003370602s
addons_test.go:572: (dbg) Run:  kubectl --context addons-191833 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-191833 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-191833 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-191833 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-191833 delete pod task-pv-pod: (1.267521241s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-191833 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-191833 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-191833 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cc69e099-87b9-40f2-876d-69e0ba8e5611] Pending
helpers_test.go:344: "task-pv-pod-restore" [cc69e099-87b9-40f2-876d-69e0ba8e5611] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.002737313s
addons_test.go:614: (dbg) Run:  kubectl --context addons-191833 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-191833 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-191833 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.501235404s)
--- PASS: TestAddons/parallel/CSI (63.68s)
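
Note: the repeated helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC phase until it leaves Pending or the wait times out. A minimal sketch of that pattern in Go, shelling out to kubectl with os/exec (the function name, interval, and timeout are illustrative, not minikube's actual helper code):

    // waitForPVCPhase polls `kubectl get pvc` until the claim reaches the
    // desired phase or the timeout expires. Hypothetical helper, for illustration.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForPVCPhase(kubeCtx, name, ns, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeCtx,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second) // re-check, mirroring the repeated log lines
        }
        return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-191833", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }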

TestAddons/parallel/Headlamp (17.38s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-191833 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-snsb5" [29a51c34-75c6-4324-bfe2-0c05dcd08713] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-snsb5" [29a51c34-75c6-4324-bfe2-0c05dcd08713] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003569782s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable headlamp --alsologtostderr -v=1: (5.586605436s)
--- PASS: TestAddons/parallel/Headlamp (17.38s)

TestAddons/parallel/CloudSpanner (5.63s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-694f8b9456-4nxg6" [9cf5f8fe-a492-4a10-a700-0437feb45826] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002996007s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (54.61s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-191833 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-191833 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [78dc2a62-1108-4072-a01e-95f838a6e956] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [78dc2a62-1108-4072-a01e-95f838a6e956] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [78dc2a62-1108-4072-a01e-95f838a6e956] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003153179s
addons_test.go:967: (dbg) Run:  kubectl --context addons-191833 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 ssh "cat /opt/local-path-provisioner/pvc-a11c31bf-27fb-422f-86e7-e8fdc994ba8f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-191833 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-191833 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.70348027s)
--- PASS: TestAddons/parallel/LocalPath (54.61s)

TestAddons/parallel/NvidiaDevicePlugin (5.57s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6sz4x" [b05d034d-94e5-4669-a212-ca20ff817f1b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.080144815s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

TestAddons/parallel/Yakd (11.6s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
I0605 18:37:31.355682   13279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-b5wsb" [1a5cea3d-129f-4f69-bd99-851b0faa58ff] Running
I0605 18:37:31.358417   13279 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0605 18:37:31.358438   13279 kapi.go:107] duration metric: took 2.76826ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002852719s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-191833 addons disable yakd --alsologtostderr -v=1: (5.59741034s)
--- PASS: TestAddons/parallel/Yakd (11.60s)

TestAddons/parallel/AmdGpuDevicePlugin (6.41s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-kg7z2" [72a513ae-8981-48d4-ab13-b4d8f6e6efed] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003952751s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-191833 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.41s)

TestAddons/StoppedEnableDisable (11.09s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-191833
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-191833: (10.850839786s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-191833
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-191833
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-191833
--- PASS: TestAddons/StoppedEnableDisable (11.09s)

TestCertOptions (27.13s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-984256 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-984256 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (23.756255744s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-984256 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-984256 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-984256 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-984256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-984256
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-984256: (2.81717479s)
--- PASS: TestCertOptions (27.13s)
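
Note: the openssl step above verifies that the extra --apiserver-ips and --apiserver-names values landed in the apiserver certificate's SANs. A sketch of the same check in Go with crypto/x509 (the file path and expected values are taken from the command line in the log; the program itself is illustrative, not the test's code):

    // Parse the apiserver certificate and confirm the requested SANs are present.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // e.g. copied out of /var/lib/minikube/certs/
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        wantIP, foundIP := net.ParseIP("192.168.15.15"), false
        for _, ip := range cert.IPAddresses {
            if ip.Equal(wantIP) {
                foundIP = true
            }
        }
        foundName := false
        for _, name := range cert.DNSNames {
            if name == "www.google.com" {
                foundName = true
            }
        }
        fmt.Printf("IP SAN present: %v, DNS SAN present: %v\n", foundIP, foundName)
    }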

TestCertExpiration (232.84s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-307905 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-307905 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.390600554s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-307905 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-307905 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (22.302305178s)
helpers_test.go:175: Cleaning up "cert-expiration-307905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-307905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-307905: (2.148178529s)
--- PASS: TestCertExpiration (232.84s)

TestDockerFlags (30.11s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-048914 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-048914 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.411876341s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-048914 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-048914 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-048914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-048914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-048914: (2.126313351s)
--- PASS: TestDockerFlags (30.11s)
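
Note: the two ssh probes above assert that the --docker-env and --docker-opt values were propagated into the dockerd systemd unit. A sketch of the Environment-property assertion (the binary path, profile name, and expected values come from the log; the checker itself is illustrative):

    // Check that FOO=BAR and BAZ=BAT reached docker's systemd Environment.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-048914",
            "ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
        if err != nil {
            panic(err)
        }
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            fmt.Printf("%s present: %v\n", want, strings.Contains(string(out), want))
        }
    }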

TestForceSystemdFlag (33.54s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-197555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0605 19:08:55.397210   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-197555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.995976937s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-197555 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-197555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-197555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-197555: (2.235113555s)
--- PASS: TestForceSystemdFlag (33.54s)

TestForceSystemdEnv (29.75s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-325507 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-325507 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.261551141s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-325507 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-325507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-325507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-325507: (2.139739542s)
--- PASS: TestForceSystemdEnv (29.75s)

TestKVMDriverInstallOrUpdate (1.34s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0605 19:09:23.654515   13279 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0605 19:09:23.654688   13279 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0605 19:09:23.683059   13279 install.go:62] docker-machine-driver-kvm2: exit status 1
W0605 19:09:23.683276   13279 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0605 19:09:23.683365   13279 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate20082417/001/docker-machine-driver-kvm2
I0605 19:09:23.839927   13279 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate20082417/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0] Decompressors:map[bz2:0xc000012860 gz:0xc000012868 tar:0xc000012810 tar.bz2:0xc000012820 tar.gz:0xc000012830 tar.xz:0xc000012840 tar.zst:0xc000012850 tbz2:0xc000012820 tgz:0xc000012830 txz:0xc000012840 tzst:0xc000012850 xz:0xc000012870 zip:0xc000012890 zst:0xc000012878] Getters:map[file:0xc0021789e0 http:0xc00098a870 https:0xc00098a8c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0605 19:09:23.839974   13279 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate20082417/001/docker-machine-driver-kvm2
I0605 19:09:24.419249   13279 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0605 19:09:24.419339   13279 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0605 19:09:24.446792   13279 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0605 19:09:24.446821   13279 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0605 19:09:24.446881   13279 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0605 19:09:24.446907   13279 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate20082417/002/docker-machine-driver-kvm2
I0605 19:09:24.470293   13279 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate20082417/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0 0x57df6e0] Decompressors:map[bz2:0xc000012860 gz:0xc000012868 tar:0xc000012810 tar.bz2:0xc000012820 tar.gz:0xc000012830 tar.xz:0xc000012840 tar.zst:0xc000012850 tbz2:0xc000012820 tgz:0xc000012830 txz:0xc000012840 tzst:0xc000012850 xz:0xc000012870 zip:0xc000012890 zst:0xc000012878] Getters:map[file:0xc00200abd0 http:0xc0004d5040 https:0xc0004d5130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0605 19:09:24.470342   13279 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate20082417/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.34s)
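
Note: the warnings above show the driver updater's fallback behavior: the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) 404s for v1.3.0, so the download retries the unsuffixed common name, and the test still passes. A sketch of that try-then-fall-back pattern (URLs follow the log; the helper is illustrative and deliberately skips the checksum handling the real downloader performs):

    // Try the arch-specific asset first, fall back to the common name on failure.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func fetch(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
        if err := fetch(base+"-amd64", "docker-machine-driver-kvm2"); err != nil {
            fmt.Println("arch-specific download failed:", err, "- trying the common version")
            if err := fetch(base, "docker-machine-driver-kvm2"); err != nil {
                panic(err)
            }
        }
    }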

TestErrorSpam/setup (25.41s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-490560 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-490560 --driver=docker  --container-runtime=docker
E0605 18:38:55.399567   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.406045   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.417489   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.438833   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.480329   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.561860   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:55.723493   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:56.045305   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:56.686867   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:38:57.968439   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:39:00.531342   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:39:05.652739   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:39:15.894888   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-490560 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-490560 --driver=docker  --container-runtime=docker: (25.405797933s)
--- PASS: TestErrorSpam/setup (25.41s)

                                                
                                    
x
+
TestErrorSpam/start (0.56s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.84s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.14s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 pause
--- PASS: TestErrorSpam/pause (1.14s)

TestErrorSpam/unpause (1.34s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (10.82s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 stop: (10.647491799s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490560 --log_dir /tmp/nospam-490560 stop
--- PASS: TestErrorSpam/stop (10.82s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20889-6302/.minikube/files/etc/test/nested/copy/13279/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.65s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0605 18:39:36.377109   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:40:17.339302   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-390168 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m2.651930105s)
--- PASS: TestFunctional/serial/StartWithProxy (62.65s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.72s)
=== RUN   TestFunctional/serial/SoftStart
I0605 18:40:35.627339   13279 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-390168 --alsologtostderr -v=8: (32.714806194s)
functional_test.go:680: soft start took 32.715534234s for "functional-390168" cluster.
I0605 18:41:08.342475   13279 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestFunctional/serial/SoftStart (32.72s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-390168 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.07s)

TestFunctional/serial/CacheCmd/cache/add_local (0.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-390168 /tmp/TestFunctionalserialCacheCmdcacheadd_local1008080096/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache add minikube-local-cache-test:functional-390168
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache delete minikube-local-cache-test:functional-390168
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-390168
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (261.393458ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
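
Note: the sequence above is the reload contract: delete the image inside the node, confirm crictl no longer sees it (the expected exit status 1), then let `cache reload` restore it from minikube's on-disk cache. A sketch of the same steps driven from Go (binary path, profile, and image name come from the log; the runner itself is illustrative, not the test's code):

    // Exercise `minikube cache reload`: remove the image in the node, verify it
    // is gone, reload the cache, verify it is back.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        fmt.Printf("$ %v\n%s", args, out)
        return err
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        img := "registry.k8s.io/pause:latest"
        _ = run(mk, "-p", "functional-390168", "ssh", "sudo docker rmi "+img)
        // Expected to fail with exit status 1 while the image is absent.
        if err := run(mk, "-p", "functional-390168", "ssh", "sudo crictl inspecti "+img); err == nil {
            fmt.Println("unexpected: image still present")
        }
        _ = run(mk, "-p", "functional-390168", "cache", "reload")
        // Should now succeed again.
        if err := run(mk, "-p", "functional-390168", "ssh", "sudo crictl inspecti "+img); err != nil {
            fmt.Println("reload did not restore the image:", err)
        }
    }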

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 kubectl -- --context functional-390168 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-390168 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.03s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0605 18:41:39.262792   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-390168 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.026616875s)
functional_test.go:778: restart took 37.026790414s for "functional-390168" cluster.
I0605 18:41:50.165623   13279 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestFunctional/serial/ExtraConfig (37.03s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-390168 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
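
Note: the phase/status pairs above come from a single `kubectl get po -o=json` over the control-plane pods: phase is `.status.phase`, and the Ready status is the `Ready` entry in `.status.conditions`. A sketch of extracting both in Go (the struct shapes are mine, trimmed to only the fields the check needs; the `component` label is the kubeadm convention and is assumed here):

    // Report phase and Ready condition for control-plane pods.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Labels map[string]string `json:"labels"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-390168",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := "Unknown"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
        }
    }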

TestFunctional/serial/LogsCmd (0.94s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

TestFunctional/serial/LogsFileCmd (0.94s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 logs --file /tmp/TestFunctionalserialLogsFileCmd1536465695/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.94s)

TestFunctional/serial/InvalidService (4.17s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-390168 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-390168
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-390168: exit status 115 (312.563957ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30972 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-390168 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 config get cpus: exit status 14 (80.381919ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 config get cpus: exit status 14 (61.116159ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
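
Note: `config get` on an unset key exits with status 14 rather than 0, which is what the test asserts twice above. A sketch of reading that exit code from Go via exec.ExitError (binary path and profile come from the log; the switch itself is illustrative):

    // Distinguish "key not set" (exit 14) from success for `minikube config get`.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-390168", "config", "get", "cpus")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("cpus =", strings.TrimSpace(string(out)))
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
            fmt.Println("cpus is not set") // the "not found in config" case from the log
        default:
            panic(err)
        }
    }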

TestFunctional/parallel/DashboardCmd (8.34s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-390168 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-390168 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 66768: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.34s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-390168 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (146.113748ms)

-- stdout --
	* [functional-390168] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20889
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0605 18:42:23.197827   65727 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:42:23.198085   65727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:42:23.198096   65727 out.go:358] Setting ErrFile to fd 2...
	I0605 18:42:23.198100   65727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:42:23.198320   65727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:42:23.198937   65727 out.go:352] Setting JSON to false
	I0605 18:42:23.200077   65727 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1490,"bootTime":1749147453,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0605 18:42:23.200184   65727 start.go:140] virtualization: kvm guest
	I0605 18:42:23.202141   65727 out.go:177] * [functional-390168] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0605 18:42:23.203549   65727 notify.go:220] Checking for updates...
	I0605 18:42:23.203562   65727 out.go:177]   - MINIKUBE_LOCATION=20889
	I0605 18:42:23.204679   65727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:42:23.205801   65727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:42:23.207108   65727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	I0605 18:42:23.208322   65727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0605 18:42:23.209474   65727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:42:23.210940   65727 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:42:23.211425   65727 driver.go:404] Setting default libvirt URI to qemu:///system
	I0605 18:42:23.236867   65727 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0605 18:42:23.237012   65727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:42:23.287394   65727 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:55 SystemTime:2025-06-05 18:42:23.277911657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:42:23.287518   65727 docker.go:318] overlay module found
	I0605 18:42:23.289177   65727 out.go:177] * Using the docker driver based on existing profile
	I0605 18:42:23.290291   65727 start.go:304] selected driver: docker
	I0605 18:42:23.290301   65727 start.go:908] validating driver "docker" against &{Name:functional-390168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:functional-390168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0605 18:42:23.290391   65727 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:42:23.292382   65727 out.go:201] 
	W0605 18:42:23.293466   65727 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0605 18:42:23.294665   65727 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)
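
The failing half of the dry run above is the interesting one: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before anything is created, because the usable minimum at the time of this run was 1800MB. A hedged Go sketch of the same probe (binary path and profile name taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// 250MB is below the usable minimum (1800MB at the time of this run),
	// so --dry-run should fail fast with exit status 23 and create nothing.
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-390168", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=docker").CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 23 {
		fmt.Println("undersized memory request rejected as expected")
		return
	}
	fmt.Printf("unexpected result: %v\n%s", err, out)
}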

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-390168 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-390168 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (165.248255ms)

-- stdout --
	* [functional-390168] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20889
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0605 18:42:20.417736   64294 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:42:20.418306   64294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:42:20.418357   64294 out.go:358] Setting ErrFile to fd 2...
	I0605 18:42:20.418372   64294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:42:20.418986   64294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:42:20.419981   64294 out.go:352] Setting JSON to false
	I0605 18:42:20.421065   64294 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1487,"bootTime":1749147453,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0605 18:42:20.421149   64294 start.go:140] virtualization: kvm guest
	I0605 18:42:20.423506   64294 out.go:177] * [functional-390168] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0605 18:42:20.425278   64294 out.go:177]   - MINIKUBE_LOCATION=20889
	I0605 18:42:20.425313   64294 notify.go:220] Checking for updates...
	I0605 18:42:20.428304   64294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:42:20.429763   64294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	I0605 18:42:20.431076   64294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	I0605 18:42:20.432454   64294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0605 18:42:20.433690   64294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:42:20.435370   64294 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:42:20.435894   64294 driver.go:404] Setting default libvirt URI to qemu:///system
	I0605 18:42:20.462916   64294 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0605 18:42:20.463024   64294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:42:20.520913   64294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-06-05 18:42:20.511970243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:42:20.521022   64294 docker.go:318] overlay module found
	I0605 18:42:20.523195   64294 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0605 18:42:20.524560   64294 start.go:304] selected driver: docker
	I0605 18:42:20.524580   64294 start.go:908] validating driver "docker" against &{Name:functional-390168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:functional-390168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0605 18:42:20.524690   64294 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:42:20.527033   64294 out.go:201] 
	W0605 18:42:20.528195   64294 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0605 18:42:20.529363   64294 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
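
The French output above comes from minikube's localization picking up the process locale. A sketch of the same check, under the assumption (not confirmed by this log) that exporting a French locale via LC_ALL is what triggers the translated messages:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumption: minikube's i18n consults the standard locale variables,
	// so a French LC_ALL should yield the translated error text.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-390168", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	// Expect: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."
	fmt.Printf("%s", out)
}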

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)
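
Worth noting: `status -f` takes a Go template over the status struct, so everything outside {{...}} is emitted verbatim; the "kublet:" label in the command above is literal template text, not a field reference (the field itself is .Kubelet). A small sketch with a custom template, reusing the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Only the names inside {{...}} must match fields of the status
	// struct; the labels around them are free-form text.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-390168",
		"status", "-f", "host:{{.Host}},apiserver:{{.APIServer}}").CombinedOutput()
	fmt.Printf("%s (err: %v)\n", out, err)
}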

TestFunctional/parallel/ServiceCmdConnect (17.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-390168 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-390168 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-jjjjh" [6ded70b1-cbad-4a7e-8bb0-388d45b0b0c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-jjjjh" [6ded70b1-cbad-4a7e-8bb0-388d45b0b0c0] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.003284493s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32085
functional_test.go:1692: http://192.168.49.2:32085: success! body:

Hostname: hello-node-connect-58f9cf68d8-jjjjh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32085
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.66s)
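
The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the reachable URL, and GET it. A rough Go equivalent with no readiness wait (in practice the pod must be Running first, hence the ~17s wait above); names mirror this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := []string{"--context", "functional-390168"}
	// Deployment plus NodePort service, mirroring the kubectl calls above.
	exec.Command("kubectl", append(ctx, "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")...).Run()
	exec.Command("kubectl", append(ctx, "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")...).Run()
	// Ask minikube for the node URL, then GET it.
	raw, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-390168",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(raw))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}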

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (35.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6873be67-c553-4657-837e-57f813ec67c0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00365078s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-390168 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-390168 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-390168 get pvc myclaim -o=json
I0605 18:42:07.536702   13279 retry.go:31] will retry after 1.881009158s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ddcc4d4a-7eed-4b70-9a52-01651f77bc84 ResourceVersion:765 Generation:0 CreationTimestamp:2025-06-05 18:42:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-ddcc4d4a-7eed-4b70-9a52-01651f77bc84 StorageClassName:0xc001bfb760 VolumeMode:0xc001bfb770 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-390168 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-390168 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [37ce2f7d-46c8-4939-9fa1-c658b1c1c487] Pending
helpers_test.go:344: "sp-pod" [37ce2f7d-46c8-4939-9fa1-c658b1c1c487] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [37ce2f7d-46c8-4939-9fa1-c658b1c1c487] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003484902s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-390168 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-390168 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-390168 delete -f testdata/storage-provisioner/pod.yaml: (1.175860386s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-390168 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1743b5b-f5f0-4145-b069-4df92c092f27] Pending
helpers_test.go:344: "sp-pod" [a1743b5b-f5f0-4145-b069-4df92c092f27] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1743b5b-f5f0-4145-b069-4df92c092f27] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003380801s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-390168 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.96s)
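
The persistence check above in miniature: write a file through the PVC-backed mount, delete and recreate the pod, then confirm the file survived the restart. The sketch below reuses the harness's testdata manifest and pod name; the wait for the recreated pod to reach Running is elided:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the profile's context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-390168"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// The claim (myclaim) backs /tmp/mount in sp-pod, so data written
	// there should outlive the pod itself.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the new sp-pod to be Running before this exec)
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount: %s(err: %v)\n", out, err)
}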

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh -n functional-390168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cp functional-390168:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3491420530/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh -n functional-390168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh -n functional-390168 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

TestFunctional/parallel/MySQL (24.21s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-390168 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-bkvb6" [e09a491a-8254-4ac5-92da-4275b69363e7] Pending
helpers_test.go:344: "mysql-58ccfd96bb-bkvb6" [e09a491a-8254-4ac5-92da-4275b69363e7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-bkvb6" [e09a491a-8254-4ac5-92da-4275b69363e7] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.002921399s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;": exit status 1 (146.196071ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0605 18:42:17.436940   13279 retry.go:31] will retry after 1.094534625s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;": exit status 1 (120.970145ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0605 18:42:18.653771   13279 retry.go:31] will retry after 949.352586ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;": exit status 1 (101.075369ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0605 18:42:19.705232   13279 retry.go:31] will retry after 2.447339394s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-390168 exec mysql-58ccfd96bb-bkvb6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.21s)
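
The retries above show why gating on the pod phase alone is not enough: the container is Running while mysqld is still initializing (first an auth race, then socket errors), so the harness retries with growing backoff until the query succeeds. A plain retry loop achieves the same; the pod name is the one from this run and will differ elsewhere:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the query with doubling backoff until mysqld accepts it.
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-390168",
			"exec", "mysql-58ccfd96bb-bkvb6", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("connected:\n%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
}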

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/13279/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /etc/test/nested/copy/13279/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/13279.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /etc/ssl/certs/13279.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/13279.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /usr/share/ca-certificates/13279.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/132792.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /etc/ssl/certs/132792.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/132792.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /usr/share/ca-certificates/132792.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)
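
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the certificate's subject hash plus a .0 suffix. Given a local copy of a synced cert (the input path below is hypothetical), the expected name can be recomputed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash that c_rehash uses as
	// the symlink name. "13279.pem" is a hypothetical local copy of the
	// cert that the test synced into the VM.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "13279.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("expected filename: %s.0\n", strings.TrimSpace(string(out)))
}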

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-390168 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh "sudo systemctl is-active crio": exit status 1 (318.100073ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)
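
The non-zero exit above is the expected outcome: `systemctl is-active` prints "inactive" and exits 3 for a stopped unit (visible in the stderr as "Process exited with status 3"), and `minikube ssh` surfaces the remote failure as a non-zero status of its own. A sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With docker as the active runtime, crio should be inactive, so this
	// command is expected to print "inactive" and exit non-zero.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-390168",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("output: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}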

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 59796: os: process already finished
helpers_test.go:502: unable to terminate pid 59463: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-390168 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.1
registry.k8s.io/kube-proxy:v1.33.1
registry.k8s.io/kube-controller-manager:v1.33.1
registry.k8s.io/kube-apiserver:v1.33.1
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-390168
docker.io/kicbase/echo-server:functional-390168
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-390168 image ls --format short --alsologtostderr:
I0605 18:42:24.865152   66694 out.go:345] Setting OutFile to fd 1 ...
I0605 18:42:24.865440   66694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:24.865449   66694 out.go:358] Setting ErrFile to fd 2...
I0605 18:42:24.865453   66694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:24.865692   66694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
I0605 18:42:24.866243   66694 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:24.866360   66694 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:24.866858   66694 cli_runner.go:164] Run: docker container inspect functional-390168 --format={{.State.Status}}
I0605 18:42:24.885883   66694 ssh_runner.go:195] Run: systemctl --version
I0605 18:42:24.885938   66694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-390168
I0605 18:42:24.903683   66694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/functional-390168/id_rsa Username:docker}
I0605 18:42:25.019461   66694 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-390168 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-390168 | 41550e1487d12 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-390168 | a9c8288f4ca48 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.33.1           | 398c985c0d950 | 73.4MB |
| registry.k8s.io/kube-apiserver              | v1.33.1           | c6ab243b29f82 | 102MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.33.1           | ef43894fa110c | 94.6MB |
| registry.k8s.io/etcd                        | 3.5.21-0          | 499038711c081 | 153MB  |
| registry.k8s.io/coredns/coredns             | v1.12.0           | 1cf5f116067c6 | 70.1MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-390168 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-proxy                  | v1.33.1           | b79c189b052cd | 97.9MB |
| docker.io/library/nginx                     | latest            | be69f2940aaf6 | 192MB  |
| docker.io/library/nginx                     | alpine            | 6769dc3a703c7 | 48.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-390168 image ls --format table --alsologtostderr:
I0605 18:42:28.714208   68137 out.go:345] Setting OutFile to fd 1 ...
I0605 18:42:28.714427   68137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:28.714435   68137 out.go:358] Setting ErrFile to fd 2...
I0605 18:42:28.714439   68137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:28.714602   68137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
I0605 18:42:28.715111   68137 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:28.715245   68137 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:28.715588   68137 cli_runner.go:164] Run: docker container inspect functional-390168 --format={{.State.Status}}
I0605 18:42:28.734005   68137 ssh_runner.go:195] Run: systemctl --version
I0605 18:42:28.734051   68137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-390168
I0605 18:42:28.752145   68137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/functional-390168/id_rsa Username:docker}
I0605 18:42:28.844512   68137 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-390168 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"a9c8288f4ca488e4eb7396d6c9385753d0d3dccc7bbdef47b18aa737fcb1e534","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-390168"],"size":"30"},{"id":"398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.1"],"size":"73400000"},{"id":"1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"70100000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"41550e1487d1256f483a337495f9d90193dfeea3a65e8ec56d3d06d48c5d3192","repoDigests":[],"repoTags":["localhost/my-image:functional-390168"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.1"],"size":"102000000"},{"id":"b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.33.1"],"size":"97900000"},{"id":"be69f2940aaf64fdf50c9c99420cbd57e10ee655ec7204df1c407e9af63d0cc1","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"153000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.1"],"size":"94600000"},{"id":"6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-390168"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-390168 image ls --format json --alsologtostderr:
I0605 18:42:28.505923   68089 out.go:345] Setting OutFile to fd 1 ...
I0605 18:42:28.506160   68089 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:28.506168   68089 out.go:358] Setting ErrFile to fd 2...
I0605 18:42:28.506171   68089 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:28.506392   68089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
I0605 18:42:28.506928   68089 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:28.507016   68089 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:28.507424   68089 cli_runner.go:164] Run: docker container inspect functional-390168 --format={{.State.Status}}
I0605 18:42:28.524200   68089 ssh_runner.go:195] Run: systemctl --version
I0605 18:42:28.524248   68089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-390168
I0605 18:42:28.543638   68089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/functional-390168/id_rsa Username:docker}
I0605 18:42:28.635856   68089 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
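Note: the stdout above is one flat JSON array of image records. Assuming jq is available on the host (it is not something the test itself uses), the tags can be pulled out for a quick manual check with, for example:

	out/minikube-linux-amd64 -p functional-390168 image ls --format json | jq -r '.[].repoTags[]'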

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-390168 image ls --format yaml --alsologtostderr:
- id: ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.1
size: "94600000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-390168
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: a9c8288f4ca488e4eb7396d6c9385753d0d3dccc7bbdef47b18aa737fcb1e534
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-390168
size: "30"
- id: 6769dc3a703c719c1d2756bda113659be28ae16cf0da58dd5fd823d6b9a050ea
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48200000"
- id: 1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "70100000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.1
size: "73400000"
- id: c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.1
size: "102000000"
- id: b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.33.1
size: "97900000"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "153000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: be69f2940aaf64fdf50c9c99420cbd57e10ee655ec7204df1c407e9af63d0cc1
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-390168 image ls --format yaml --alsologtostderr:
I0605 18:42:25.109799   66810 out.go:345] Setting OutFile to fd 1 ...
I0605 18:42:25.110015   66810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:25.110023   66810 out.go:358] Setting ErrFile to fd 2...
I0605 18:42:25.110027   66810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:25.110192   66810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
I0605 18:42:25.110753   66810 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:25.110851   66810 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:25.111266   66810 cli_runner.go:164] Run: docker container inspect functional-390168 --format={{.State.Status}}
I0605 18:42:25.136059   66810 ssh_runner.go:195] Run: systemctl --version
I0605 18:42:25.136126   66810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-390168
I0605 18:42:25.160857   66810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/functional-390168/id_rsa Username:docker}
I0605 18:42:25.272490   66810 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh pgrep buildkitd: exit status 1 (248.005375ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image build -t localhost/my-image:functional-390168 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-390168 image build -t localhost/my-image:functional-390168 testdata/build --alsologtostderr: (2.591655678s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-390168 image build -t localhost/my-image:functional-390168 testdata/build --alsologtostderr:
I0605 18:42:25.636163   67175 out.go:345] Setting OutFile to fd 1 ...
I0605 18:42:25.636317   67175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:25.636327   67175 out.go:358] Setting ErrFile to fd 2...
I0605 18:42:25.636331   67175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0605 18:42:25.636507   67175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
I0605 18:42:25.637037   67175 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:25.637561   67175 config.go:182] Loaded profile config "functional-390168": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
I0605 18:42:25.637941   67175 cli_runner.go:164] Run: docker container inspect functional-390168 --format={{.State.Status}}
I0605 18:42:25.656141   67175 ssh_runner.go:195] Run: systemctl --version
I0605 18:42:25.656183   67175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-390168
I0605 18:42:25.673854   67175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/functional-390168/id_rsa Username:docker}
I0605 18:42:25.763758   67175 build_images.go:161] Building image from path: /tmp/build.780418433.tar
I0605 18:42:25.763836   67175 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0605 18:42:25.772707   67175 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.780418433.tar
I0605 18:42:25.776084   67175 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.780418433.tar: stat -c "%s %y" /var/lib/minikube/build/build.780418433.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.780418433.tar': No such file or directory
I0605 18:42:25.776115   67175 ssh_runner.go:362] scp /tmp/build.780418433.tar --> /var/lib/minikube/build/build.780418433.tar (3072 bytes)
I0605 18:42:25.800199   67175 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.780418433
I0605 18:42:25.809327   67175 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.780418433 -xf /var/lib/minikube/build/build.780418433.tar
I0605 18:42:25.819017   67175 docker.go:373] Building image: /var/lib/minikube/build/build.780418433
I0605 18:42:25.819080   67175 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-390168 /var/lib/minikube/build/build.780418433
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:41550e1487d1256f483a337495f9d90193dfeea3a65e8ec56d3d06d48c5d3192 done
#8 naming to localhost/my-image:functional-390168 done
#8 DONE 0.0s
I0605 18:42:28.155400   67175 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-390168 /var/lib/minikube/build/build.780418433: (2.336298726s)
I0605 18:42:28.155473   67175 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.780418433
I0605 18:42:28.167525   67175 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.780418433.tar
I0605 18:42:28.176659   67175 build_images.go:217] Built localhost/my-image:functional-390168 from /tmp/build.780418433.tar
I0605 18:42:28.176687   67175 build_images.go:133] succeeded building to: functional-390168
I0605 18:42:28.176765   67175 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
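Note: judging from the logged build stages (a 97B Dockerfile, FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /), the testdata/build context presumably contains a Dockerfile roughly like the sketch below; this is a reconstruction from the log, not the actual file contents:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /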

TestFunctional/parallel/ImageCommands/Setup (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-390168
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/DockerEnv/bash (1.08s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-390168 docker-env) && out/minikube-linux-amd64 status -p functional-390168"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-390168 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)
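Note: the docker-env pattern exercised above works because `minikube docker-env` emits export statements (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) that point the host docker client at the daemon inside the functional-390168 container, so the trailing `docker images` lists the cluster's images. The emitted variables can be inspected directly, without the eval:

	out/minikube-linux-amd64 -p functional-390168 docker-env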

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-390168 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3b5f74d4-6850-4b96-a735-555cbaf04fa9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3b5f74d4-6850-4b96-a735-555cbaf04fa9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002917265s
I0605 18:42:06.041232   13279 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image load --daemon kicbase/echo-server:functional-390168 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image load --daemon kicbase/echo-server:functional-390168 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-390168
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image load --daemon kicbase/echo-server:functional-390168 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image save kicbase/echo-server:functional-390168 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image rm kicbase/echo-server:functional-390168 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "399.78377ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "48.142511ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "352.761758ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "55.979461ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-390168
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 image save --daemon kicbase/echo-server:functional-390168 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-390168
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-390168 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.145.250 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-390168 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-390168 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-390168 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-rcfkx" [b26ee798-abb8-41c0-9068-a346f3137f85] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-rcfkx" [b26ee798-abb8-41c0-9068-a346f3137f85] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.003043821s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.16s)

TestFunctional/parallel/ServiceCmd/List (0.92s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.92s)

TestFunctional/parallel/MountCmd/any-port (7.6s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdany-port3886656388/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1749148940535788609" to /tmp/TestFunctionalparallelMountCmdany-port3886656388/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1749148940535788609" to /tmp/TestFunctionalparallelMountCmdany-port3886656388/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1749148940535788609" to /tmp/TestFunctionalparallelMountCmdany-port3886656388/001/test-1749148940535788609
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.854617ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0605 18:42:20.804952   13279 retry.go:31] will retry after 397.339213ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  5 18:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  5 18:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  5 18:42 test-1749148940535788609
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh cat /mount-9p/test-1749148940535788609
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-390168 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [731e23d4-4555-4a90-b026-14fb63da5575] Pending
helpers_test.go:344: "busybox-mount" [731e23d4-4555-4a90-b026-14fb63da5575] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [731e23d4-4555-4a90-b026-14fb63da5575] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [731e23d4-4555-4a90-b026-14fb63da5575] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004204912s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-390168 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdany-port3886656388/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.60s)
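Note: the probes this test runs can be replayed by hand against a live mount; these are the same commands that appear in the log above, collected for reference:

	out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-390168 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-390168 ssh "sudo umount -f /mount-9p"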

TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service list -o json
functional_test.go:1511: Took "878.78807ms" to run "out/minikube-linux-amd64 -p functional-390168 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31931
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31931
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdspecific-port691207005/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.850433ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0605 18:42:28.453849   13279 retry.go:31] will retry after 549.324604ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdspecific-port691207005/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh "sudo umount -f /mount-9p": exit status 1 (283.431492ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --

** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-390168 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdspecific-port691207005/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T" /mount1: exit status 1 (319.879334ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0605 18:42:30.360691   13279 retry.go:31] will retry after 484.375887ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-390168 ssh "findmnt -T" /mount3
2025/06/05 18:42:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-390168 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-390168 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3066766821/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-390168
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-390168
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-390168
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (94.66s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0605 18:43:55.399359   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m33.992916897s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (94.66s)

TestMultiControlPlane/serial/DeployApp (38.08s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 kubectl -- rollout status deployment/busybox: (2.860615942s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:18.926648   13279 retry.go:31] will retry after 919.789467ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:19.959654   13279 retry.go:31] will retry after 1.349705445s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:21.418966   13279 retry.go:31] will retry after 2.655579273s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0605 18:44:23.105717   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:24.187728   13279 retry.go:31] will retry after 3.697660022s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:27.999243   13279 retry.go:31] will retry after 3.726410036s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:31.847774   13279 retry.go:31] will retry after 8.634432009s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0605 18:44:40.594184   13279 retry.go:31] will retry after 11.478681825s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-7tw8h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-h2crq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-kjhb8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-7tw8h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-h2crq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-kjhb8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-7tw8h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-h2crq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-kjhb8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (38.08s)

TestMultiControlPlane/serial/PingHostFromPods (1.03s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-7tw8h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-7tw8h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-h2crq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-h2crq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-kjhb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 kubectl -- exec busybox-58667487b6-kjhb8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)

TestMultiControlPlane/serial/AddWorkerNode (12.94s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 node add --alsologtostderr -v 5: (12.104596802s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (12.94s)

TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-800222 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (15.75s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp testdata/cp-test.txt ha-800222:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1378223460/001/cp-test_ha-800222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222:/home/docker/cp-test.txt ha-800222-m02:/home/docker/cp-test_ha-800222_ha-800222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test_ha-800222_ha-800222-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222:/home/docker/cp-test.txt ha-800222-m03:/home/docker/cp-test_ha-800222_ha-800222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test_ha-800222_ha-800222-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222:/home/docker/cp-test.txt ha-800222-m04:/home/docker/cp-test_ha-800222_ha-800222-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test_ha-800222_ha-800222-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp testdata/cp-test.txt ha-800222-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1378223460/001/cp-test_ha-800222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m02:/home/docker/cp-test.txt ha-800222:/home/docker/cp-test_ha-800222-m02_ha-800222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test_ha-800222-m02_ha-800222.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m02:/home/docker/cp-test.txt ha-800222-m03:/home/docker/cp-test_ha-800222-m02_ha-800222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test_ha-800222-m02_ha-800222-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m02:/home/docker/cp-test.txt ha-800222-m04:/home/docker/cp-test_ha-800222-m02_ha-800222-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test_ha-800222-m02_ha-800222-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp testdata/cp-test.txt ha-800222-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1378223460/001/cp-test_ha-800222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m03:/home/docker/cp-test.txt ha-800222:/home/docker/cp-test_ha-800222-m03_ha-800222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test_ha-800222-m03_ha-800222.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m03:/home/docker/cp-test.txt ha-800222-m02:/home/docker/cp-test_ha-800222-m03_ha-800222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test_ha-800222-m03_ha-800222-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m03:/home/docker/cp-test.txt ha-800222-m04:/home/docker/cp-test_ha-800222-m03_ha-800222-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test_ha-800222-m03_ha-800222-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp testdata/cp-test.txt ha-800222-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1378223460/001/cp-test_ha-800222-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m04:/home/docker/cp-test.txt ha-800222:/home/docker/cp-test_ha-800222-m04_ha-800222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222 "sudo cat /home/docker/cp-test_ha-800222-m04_ha-800222.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m04:/home/docker/cp-test.txt ha-800222-m02:/home/docker/cp-test_ha-800222-m04_ha-800222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test_ha-800222-m04_ha-800222-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 cp ha-800222-m04:/home/docker/cp-test.txt ha-800222-m03:/home/docker/cp-test_ha-800222-m04_ha-800222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 ssh -n ha-800222-m03 "sudo cat /home/docker/cp-test_ha-800222-m04_ha-800222-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.75s)
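For reference, the copy matrix above is `minikube cp` exercised in every direction; one leg can be reproduced by hand. A minimal sketch, assuming the ha-800222 profile from this run is still up (the /tmp path is illustrative, standing in for the test's temp dir):

	# host -> node, then verify on the node
	minikube -p ha-800222 cp testdata/cp-test.txt ha-800222-m02:/home/docker/cp-test.txt
	minikube -p ha-800222 ssh -n ha-800222-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> host, then round-trip check
	minikube -p ha-800222 cp ha-800222-m02:/home/docker/cp-test.txt /tmp/cp-test-back.txt
	diff testdata/cp-test.txt /tmp/cp-test-back.txt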

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 node stop m02 --alsologtostderr -v 5: (10.766900004s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5: exit status 7 (657.099595ms)

-- stdout --
	ha-800222
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-800222-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800222-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-800222-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0605 18:45:35.412685   97411 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:45:35.412806   97411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:45:35.412814   97411 out.go:358] Setting ErrFile to fd 2...
	I0605 18:45:35.412818   97411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:45:35.413075   97411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:45:35.413264   97411 out.go:352] Setting JSON to false
	I0605 18:45:35.413303   97411 mustload.go:65] Loading cluster: ha-800222
	I0605 18:45:35.413399   97411 notify.go:220] Checking for updates...
	I0605 18:45:35.413698   97411 config.go:182] Loaded profile config "ha-800222": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:45:35.413716   97411 status.go:174] checking status of ha-800222 ...
	I0605 18:45:35.414090   97411 cli_runner.go:164] Run: docker container inspect ha-800222 --format={{.State.Status}}
	I0605 18:45:35.433915   97411 status.go:371] ha-800222 host status = "Running" (err=<nil>)
	I0605 18:45:35.433946   97411 host.go:66] Checking if "ha-800222" exists ...
	I0605 18:45:35.434191   97411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800222
	I0605 18:45:35.452786   97411 host.go:66] Checking if "ha-800222" exists ...
	I0605 18:45:35.453060   97411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:45:35.453111   97411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800222
	I0605 18:45:35.472479   97411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/ha-800222/id_rsa Username:docker}
	I0605 18:45:35.564159   97411 ssh_runner.go:195] Run: systemctl --version
	I0605 18:45:35.568689   97411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:45:35.579383   97411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:45:35.628290   97411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-06-05 18:45:35.618942783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:45:35.628917   97411 kubeconfig.go:125] found "ha-800222" server: "https://192.168.49.254:8443"
	I0605 18:45:35.628953   97411 api_server.go:166] Checking apiserver status ...
	I0605 18:45:35.628998   97411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:45:35.640881   97411 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2620/cgroup
	I0605 18:45:35.650140   97411 api_server.go:182] apiserver freezer: "9:freezer:/docker/7e333a770e85a82fd04210f117cf3dca0297551d86872a0fed5ecd027de3f347/kubepods/burstable/pod299df7de8df98df6d0a820a2b0737b64/2e69d751d0f34994958b3ee04762768907fd112f436df7784b9d8a8450726929"
	I0605 18:45:35.650187   97411 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e333a770e85a82fd04210f117cf3dca0297551d86872a0fed5ecd027de3f347/kubepods/burstable/pod299df7de8df98df6d0a820a2b0737b64/2e69d751d0f34994958b3ee04762768907fd112f436df7784b9d8a8450726929/freezer.state
	I0605 18:45:35.658132   97411 api_server.go:204] freezer state: "THAWED"
	I0605 18:45:35.658157   97411 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0605 18:45:35.662177   97411 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0605 18:45:35.662198   97411 status.go:463] ha-800222 apiserver status = Running (err=<nil>)
	I0605 18:45:35.662220   97411 status.go:176] ha-800222 status: &{Name:ha-800222 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:45:35.662237   97411 status.go:174] checking status of ha-800222-m02 ...
	I0605 18:45:35.662479   97411 cli_runner.go:164] Run: docker container inspect ha-800222-m02 --format={{.State.Status}}
	I0605 18:45:35.680218   97411 status.go:371] ha-800222-m02 host status = "Stopped" (err=<nil>)
	I0605 18:45:35.680247   97411 status.go:384] host is not running, skipping remaining checks
	I0605 18:45:35.680256   97411 status.go:176] ha-800222-m02 status: &{Name:ha-800222-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:45:35.680278   97411 status.go:174] checking status of ha-800222-m03 ...
	I0605 18:45:35.680586   97411 cli_runner.go:164] Run: docker container inspect ha-800222-m03 --format={{.State.Status}}
	I0605 18:45:35.697867   97411 status.go:371] ha-800222-m03 host status = "Running" (err=<nil>)
	I0605 18:45:35.697891   97411 host.go:66] Checking if "ha-800222-m03" exists ...
	I0605 18:45:35.698171   97411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800222-m03
	I0605 18:45:35.715100   97411 host.go:66] Checking if "ha-800222-m03" exists ...
	I0605 18:45:35.715380   97411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:45:35.715422   97411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800222-m03
	I0605 18:45:35.732847   97411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/ha-800222-m03/id_rsa Username:docker}
	I0605 18:45:35.824498   97411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:45:35.835746   97411 kubeconfig.go:125] found "ha-800222" server: "https://192.168.49.254:8443"
	I0605 18:45:35.835775   97411 api_server.go:166] Checking apiserver status ...
	I0605 18:45:35.835805   97411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:45:35.846241   97411 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2537/cgroup
	I0605 18:45:35.855001   97411 api_server.go:182] apiserver freezer: "9:freezer:/docker/bfc1f3415f75480573b83bfa2e594e0a075ecc545883d64f3c48a80aacfa808d/kubepods/burstable/podf80a6cb66de5d76930f94c9abf33a81b/4442fa8ffd8bb3fd490f91b6552f1ff5efb9f7eeb9c49cffb66ce24f85927744"
	I0605 18:45:35.855071   97411 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bfc1f3415f75480573b83bfa2e594e0a075ecc545883d64f3c48a80aacfa808d/kubepods/burstable/podf80a6cb66de5d76930f94c9abf33a81b/4442fa8ffd8bb3fd490f91b6552f1ff5efb9f7eeb9c49cffb66ce24f85927744/freezer.state
	I0605 18:45:35.862464   97411 api_server.go:204] freezer state: "THAWED"
	I0605 18:45:35.862490   97411 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0605 18:45:35.866504   97411 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0605 18:45:35.866527   97411 status.go:463] ha-800222-m03 apiserver status = Running (err=<nil>)
	I0605 18:45:35.866538   97411 status.go:176] ha-800222-m03 status: &{Name:ha-800222-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:45:35.866574   97411 status.go:174] checking status of ha-800222-m04 ...
	I0605 18:45:35.866864   97411 cli_runner.go:164] Run: docker container inspect ha-800222-m04 --format={{.State.Status}}
	I0605 18:45:35.884924   97411 status.go:371] ha-800222-m04 host status = "Running" (err=<nil>)
	I0605 18:45:35.884950   97411 host.go:66] Checking if "ha-800222-m04" exists ...
	I0605 18:45:35.885232   97411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-800222-m04
	I0605 18:45:35.901863   97411 host.go:66] Checking if "ha-800222-m04" exists ...
	I0605 18:45:35.902104   97411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:45:35.902140   97411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-800222-m04
	I0605 18:45:35.920084   97411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/ha-800222-m04/id_rsa Username:docker}
	I0605 18:45:36.012383   97411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:45:36.022677   97411 status.go:176] ha-800222-m04 status: &{Name:ha-800222-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)
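Note the shape of the check above: with one control-plane node stopped, `minikube status` still prints per-node state but exits non-zero (7 in this run), so scripts can branch on the exit code alone. A minimal sketch under the same assumptions:

	minikube -p ha-800222 node stop m02
	if ! minikube -p ha-800222 status; then
		echo "cluster degraded: at least one node is down"
	fi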

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.04s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 node start m02 --alsologtostderr -v 5: (38.042943518s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.04s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.61s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 stop --alsologtostderr -v 5: (33.145828933s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 start --wait true --alsologtostderr -v 5
E0605 18:46:56.761437   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:56.767888   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:56.779530   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:56.801186   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:56.843253   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:56.925095   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:57.086953   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:57.408513   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:58.050797   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:46:59.332104   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:47:01.894245   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:47:07.015877   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:47:17.257560   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:47:37.739493   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:48:18.701707   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 18:48:55.397147   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 start --wait true --alsologtostderr -v 5: (2m22.368908327s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.61s)
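The invariant tested above is that a full stop/start cycle keeps the node set intact; comparing `node list` output before and after captures it. A sketch under the same assumptions (profile ha-800222):

	before=$(minikube -p ha-800222 node list)
	minikube -p ha-800222 stop
	minikube -p ha-800222 start --wait true
	after=$(minikube -p ha-800222 node list)
	[ "$before" = "$after" ] && echo "node list preserved across restart"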

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.25s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 node delete m03 --alsologtostderr -v 5: (8.500171047s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 stop --alsologtostderr -v 5
E0605 18:49:40.623371   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 stop --alsologtostderr -v 5: (31.961318327s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5: exit status 7 (104.298954ms)

-- stdout --
	ha-800222
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800222-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-800222-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0605 18:49:54.119708  129274 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:49:54.120126  129274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:49:54.120139  129274 out.go:358] Setting ErrFile to fd 2...
	I0605 18:49:54.120145  129274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:49:54.120612  129274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:49:54.120867  129274 out.go:352] Setting JSON to false
	I0605 18:49:54.120908  129274 mustload.go:65] Loading cluster: ha-800222
	I0605 18:49:54.121007  129274 notify.go:220] Checking for updates...
	I0605 18:49:54.121351  129274 config.go:182] Loaded profile config "ha-800222": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:49:54.121373  129274 status.go:174] checking status of ha-800222 ...
	I0605 18:49:54.121772  129274 cli_runner.go:164] Run: docker container inspect ha-800222 --format={{.State.Status}}
	I0605 18:49:54.143790  129274 status.go:371] ha-800222 host status = "Stopped" (err=<nil>)
	I0605 18:49:54.143814  129274 status.go:384] host is not running, skipping remaining checks
	I0605 18:49:54.143822  129274 status.go:176] ha-800222 status: &{Name:ha-800222 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:49:54.143853  129274 status.go:174] checking status of ha-800222-m02 ...
	I0605 18:49:54.144126  129274 cli_runner.go:164] Run: docker container inspect ha-800222-m02 --format={{.State.Status}}
	I0605 18:49:54.161021  129274 status.go:371] ha-800222-m02 host status = "Stopped" (err=<nil>)
	I0605 18:49:54.161047  129274 status.go:384] host is not running, skipping remaining checks
	I0605 18:49:54.161056  129274 status.go:176] ha-800222-m02 status: &{Name:ha-800222-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:49:54.161081  129274 status.go:174] checking status of ha-800222-m04 ...
	I0605 18:49:54.161321  129274 cli_runner.go:164] Run: docker container inspect ha-800222-m04 --format={{.State.Status}}
	I0605 18:49:54.178099  129274 status.go:371] ha-800222-m04 host status = "Stopped" (err=<nil>)
	I0605 18:49:54.178122  129274 status.go:384] host is not running, skipping remaining checks
	I0605 18:49:54.178130  129274 status.go:176] ha-800222-m04 status: &{Name:ha-800222-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (64.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m3.912803149s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.66s)
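The readiness probe used in the last step is a standard kubectl go-template that prints one Ready-condition status per node; unquoted it reads:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected: one "True" line per node once the restarted cluster settles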

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (29.15s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-800222 node add --control-plane --alsologtostderr -v 5: (28.22059143s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-800222 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (29.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestImageBuild/serial/Setup (26.16s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-564761 --driver=docker  --container-runtime=docker
E0605 18:51:56.761875   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-564761 --driver=docker  --container-runtime=docker: (26.164234586s)
--- PASS: TestImageBuild/serial/Setup (26.16s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.97s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-564761
--- PASS: TestImageBuild/serial/NormalBuild (0.97s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.64s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-564761
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.44s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-564761
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.44s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-564761
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)
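Taken together, the builds above cover the main `minikube image build` variants; collected in one place (assumes a running profile named image-564761 and the testdata build contexts):

	minikube -p image-564761 image build -t aaa:latest ./testdata/image-build/test-normal                 # plain build
	minikube -p image-564761 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg       # build arg, no cache
	minikube -p image-564761 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f  # explicit Dockerfile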

                                                
                                    
TestJSONOutput/start/Command (36.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-763667 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0605 18:52:24.466212   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-763667 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (36.144056053s)
--- PASS: TestJSONOutput/start/Command (36.14s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-763667 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-763667 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-763667 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-763667 --output=json --user=testUser: (10.777194291s)
--- PASS: TestJSONOutput/stop/Command (10.78s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-940732 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-940732 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.373464ms)

-- stdout --
	{"specversion":"1.0","id":"7b763d9e-b7dc-4191-a347-81d9cf0ab01e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-940732] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2022cfd1-796d-4e71-b82a-5eac8a807888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20889"}}
	{"specversion":"1.0","id":"3bd9c915-4262-4db3-a90f-5032b0c60e83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cc622fc3-2d5d-46c2-83a7-3aafc0b6f793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig"}}
	{"specversion":"1.0","id":"6fdbd1f2-d77c-40c7-be05-d86c1e95657e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube"}}
	{"specversion":"1.0","id":"b44556a0-032b-4cd3-a189-eab88e04772f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ff285388-e318-40b7-bf78-5c0562672a38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8d796195-d259-403b-bb3d-a65d7a24f6f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-940732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-940732
--- PASS: TestErrorJSONOutput (0.20s)
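The --output=json stream above is one CloudEvent per line, so error handling can key off the event type rather than scraping text. A sketch, assuming jq is available (the profile name is illustrative):

	minikube start -p json-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64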

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.35s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-264338 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-264338 --network=: (26.306002777s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-264338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-264338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-264338: (2.02851115s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.35s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.72s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-698743 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-698743 --network=bridge: (23.776989804s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-698743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-698743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-698743: (1.923447795s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.72s)

                                                
                                    
TestKicExistingNetwork (22.57s)

=== RUN   TestKicExistingNetwork
I0605 18:53:51.760186   13279 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0605 18:53:51.776488   13279 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0605 18:53:51.776564   13279 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0605 18:53:51.776585   13279 cli_runner.go:164] Run: docker network inspect existing-network
W0605 18:53:51.793929   13279 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0605 18:53:51.793962   13279 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0605 18:53:51.793986   13279 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0605 18:53:51.794131   13279 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0605 18:53:51.810610   13279 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8b59ba5186c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:7d:47:f9:c0:d8} reservation:<nil>}
I0605 18:53:51.811128   13279 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1b660}
I0605 18:53:51.811185   13279 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0605 18:53:51.811236   13279 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0605 18:53:51.859766   13279 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-786553 --network=existing-network
E0605 18:53:55.399281   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-786553 --network=existing-network: (20.559135911s)
helpers_test.go:175: Cleaning up "existing-network-786553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-786553
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-786553: (1.884562826s)
I0605 18:54:14.319750   13279 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.57s)
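As the trace shows, --network= reuses a docker network when one with that name already exists (minikube created it here only because the inspect failed). Doing the same by hand, with an illustrative profile name and the subnet picked above:

	docker network create --driver=bridge --subnet=192.168.58.0/24 existing-network
	minikube start -p existing-network-demo --network=existing-network
	docker network inspect existing-network --format '{{range .Containers}}{{.Name}} {{end}}'  # node container should be listed
	minikube delete -p existing-network-demo && docker network rm existing-network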

                                                
                                    
TestKicCustomSubnet (24.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-863169 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-863169 --subnet=192.168.60.0/24: (22.774523332s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-863169 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-863169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-863169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-863169: (2.081098397s)
--- PASS: TestKicCustomSubnet (24.87s)

                                                
                                    
TestKicStaticIP (24.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-694494 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-694494 --static-ip=192.168.200.200: (22.608523869s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-694494 ip
helpers_test.go:175: Cleaning up "static-ip-694494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-694494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-694494: (2.061160038s)
--- PASS: TestKicStaticIP (24.79s)
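The two tests above pin the kic network instead of letting minikube pick a free subnet; the flags compose as follows (profile names are illustrative):

	minikube start -p fixed-subnet --subnet=192.168.60.0/24     # choose the docker network's subnet
	minikube start -p fixed-ip --static-ip=192.168.200.200      # pin the node IP inside its network
	minikube -p fixed-ip ip                                     # prints 192.168.200.200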

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (54.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-629723 --driver=docker  --container-runtime=docker
E0605 18:55:18.467321   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-629723 --driver=docker  --container-runtime=docker: (25.654076856s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-648109 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-648109 --driver=docker  --container-runtime=docker: (23.4232696s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-629723
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-648109
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-648109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-648109
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-648109: (2.067879935s)
helpers_test.go:175: Cleaning up "first-629723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-629723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-629723: (2.073037576s)
--- PASS: TestMinikubeProfile (54.34s)
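
TestMinikubeProfile switches the active profile back and forth between two clusters. A condensed equivalent with hypothetical profile names:

    minikube start -p first --driver=docker --container-runtime=docker
    minikube start -p second --driver=docker --container-runtime=docker
    minikube profile first        # mark "first" as the active profile
    minikube profile list -ojson  # machine-readable view of both profiles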

TestMountStart/serial/StartWithMountFirst (6.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-668178 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-668178 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.631382595s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.63s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-668178 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
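
The two tests above start a node with a host-folder mount and then verify it from inside the guest. A sketch with a hypothetical profile name; the flags mirror the invocation in the log, and /minikube-host is the in-guest mount target the test lists:

    minikube start -p mount-demo --memory=3072 --mount \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=docker
    minikube -p mount-demo ssh -- ls /minikube-host   # host files should be visible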

TestMountStart/serial/StartWithMountSecond (6.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682898 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682898 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.654842761s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.66s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-668178 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-668178 --alsologtostderr -v=5: (1.450846483s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-682898
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-682898: (1.171672131s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.61s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682898
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682898: (6.612015203s)
--- PASS: TestMountStart/serial/RestartStopped (7.61s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (45.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-062310 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0605 18:56:56.762248   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-062310 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (45.19938799s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (45.64s)
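
FreshStart2Nodes brings up a control plane plus one worker in a single start. Condensed, with a hypothetical profile name (the second node gets the -m02 suffix, as the later tests show):

    minikube start -p mn-demo --nodes=2 --memory=3072 --wait=true \
      --driver=docker --container-runtime=docker
    minikube -p mn-demo status   # expect mn-demo (control plane) and mn-demo-m02 (worker)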

TestMultiNode/serial/DeployApp2Nodes (36.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-062310 -- rollout status deployment/busybox: (2.489222359s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:12.852818   13279 retry.go:31] will retry after 711.76692ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:13.677246   13279 retry.go:31] will retry after 1.031036178s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:14.818169   13279 retry.go:31] will retry after 3.144814544s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:18.070960   13279 retry.go:31] will retry after 3.979844314s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:22.164257   13279 retry.go:31] will retry after 4.5399371s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:26.816491   13279 retry.go:31] will retry after 4.440947171s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0605 18:57:31.369369   13279 retry.go:31] will retry after 13.679667846s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-5zqnq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-s58j4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-5zqnq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-s58j4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-5zqnq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-s58j4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.56s)
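
The retries above are the test polling until the busybox deployment reports one pod IP per replica; scheduling the second replica onto the new node can lag behind the rollout. A shell sketch of that polling loop (the test itself uses a Go retry helper; the jsonpath query is taken from the log):

    # wait until two pod IPs are populated for the deployment's pods
    while [ "$(kubectl get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -lt 2 ]; do
      sleep 2
    done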

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-5zqnq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-5zqnq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-s58j4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-062310 -- exec busybox-58667487b6-s58j4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
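
Each exec above resolves host.minikube.internal inside a pod and pings the resulting gateway address; awk 'NR==5' and cut -d' ' -f3 pull the address out of busybox's nslookup output format. A condensed per-pod equivalent, assuming the busybox deployment from the previous test:

    kubectl exec deploy/busybox -- sh -c \
      "ping -c 1 \$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)"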

TestMultiNode/serial/AddNode (12.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-062310 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-062310 -v=5 --alsologtostderr: (12.175837763s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (12.81s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-062310 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (8.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp testdata/cp-test.txt multinode-062310:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile913815363/001/cp-test_multinode-062310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310:/home/docker/cp-test.txt multinode-062310-m02:/home/docker/cp-test_multinode-062310_multinode-062310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test_multinode-062310_multinode-062310-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310:/home/docker/cp-test.txt multinode-062310-m03:/home/docker/cp-test_multinode-062310_multinode-062310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test_multinode-062310_multinode-062310-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp testdata/cp-test.txt multinode-062310-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile913815363/001/cp-test_multinode-062310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m02:/home/docker/cp-test.txt multinode-062310:/home/docker/cp-test_multinode-062310-m02_multinode-062310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test_multinode-062310-m02_multinode-062310.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m02:/home/docker/cp-test.txt multinode-062310-m03:/home/docker/cp-test_multinode-062310-m02_multinode-062310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test_multinode-062310-m02_multinode-062310-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp testdata/cp-test.txt multinode-062310-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile913815363/001/cp-test_multinode-062310-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m03:/home/docker/cp-test.txt multinode-062310:/home/docker/cp-test_multinode-062310-m03_multinode-062310.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310 "sudo cat /home/docker/cp-test_multinode-062310-m03_multinode-062310.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 cp multinode-062310-m03:/home/docker/cp-test.txt multinode-062310-m02:/home/docker/cp-test_multinode-062310-m03_multinode-062310-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 ssh -n multinode-062310-m02 "sudo cat /home/docker/cp-test_multinode-062310-m03_multinode-062310-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.99s)
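
CopyFile exercises all three directions of minikube cp across the nodes, reading each file back over SSH to confirm. A sketch with hypothetical paths and profile/node names (the -m02 suffix follows the convention in the log):

    minikube -p mn-demo cp ./cp-test.txt mn-demo:/home/docker/cp-test.txt        # host -> node
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt ./roundtrip.txt      # node -> host
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt \
      mn-demo-m02:/home/docker/cp-test.txt                                       # node -> node
    minikube -p mn-demo ssh -n mn-demo-m02 -- sudo cat /home/docker/cp-test.txt  # verify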

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-062310 node stop m03: (1.172510605s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-062310 status: exit status 7 (455.383279ms)

-- stdout --
	multinode-062310
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-062310-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-062310-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr: exit status 7 (455.94715ms)

-- stdout --
	multinode-062310
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-062310-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-062310-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0605 18:58:11.553216  217811 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:58:11.553310  217811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:58:11.553316  217811 out.go:358] Setting ErrFile to fd 2...
	I0605 18:58:11.553323  217811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:58:11.553541  217811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:58:11.553747  217811 out.go:352] Setting JSON to false
	I0605 18:58:11.553778  217811 mustload.go:65] Loading cluster: multinode-062310
	I0605 18:58:11.553937  217811 notify.go:220] Checking for updates...
	I0605 18:58:11.554234  217811 config.go:182] Loaded profile config "multinode-062310": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:58:11.554256  217811 status.go:174] checking status of multinode-062310 ...
	I0605 18:58:11.554718  217811 cli_runner.go:164] Run: docker container inspect multinode-062310 --format={{.State.Status}}
	I0605 18:58:11.573090  217811 status.go:371] multinode-062310 host status = "Running" (err=<nil>)
	I0605 18:58:11.573119  217811 host.go:66] Checking if "multinode-062310" exists ...
	I0605 18:58:11.573386  217811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-062310
	I0605 18:58:11.590494  217811 host.go:66] Checking if "multinode-062310" exists ...
	I0605 18:58:11.590760  217811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:58:11.590797  217811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-062310
	I0605 18:58:11.610349  217811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/multinode-062310/id_rsa Username:docker}
	I0605 18:58:11.700473  217811 ssh_runner.go:195] Run: systemctl --version
	I0605 18:58:11.704908  217811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:58:11.715740  217811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:58:11.765801  217811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-06-05 18:58:11.756799983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0605 18:58:11.766313  217811 kubeconfig.go:125] found "multinode-062310" server: "https://192.168.67.2:8443"
	I0605 18:58:11.766343  217811 api_server.go:166] Checking apiserver status ...
	I0605 18:58:11.766372  217811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:58:11.777111  217811 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2522/cgroup
	I0605 18:58:11.785372  217811 api_server.go:182] apiserver freezer: "9:freezer:/docker/778d2154ce4b565335a2ee71cbbbcc535eefb4599a2ef520a378ad86c7c8bfe9/kubepods/burstable/podcd074aff3268a615c782f8986b90dbcc/c7b164c34527981b6d6ff6cd2b7fe6e6848553f65de5354a9ac07cd0939dc2e9"
	I0605 18:58:11.785440  217811 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/778d2154ce4b565335a2ee71cbbbcc535eefb4599a2ef520a378ad86c7c8bfe9/kubepods/burstable/podcd074aff3268a615c782f8986b90dbcc/c7b164c34527981b6d6ff6cd2b7fe6e6848553f65de5354a9ac07cd0939dc2e9/freezer.state
	I0605 18:58:11.793302  217811 api_server.go:204] freezer state: "THAWED"
	I0605 18:58:11.793342  217811 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:58:11.797582  217811 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0605 18:58:11.797608  217811 status.go:463] multinode-062310 apiserver status = Running (err=<nil>)
	I0605 18:58:11.797619  217811 status.go:176] multinode-062310 status: &{Name:multinode-062310 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:58:11.797641  217811 status.go:174] checking status of multinode-062310-m02 ...
	I0605 18:58:11.797877  217811 cli_runner.go:164] Run: docker container inspect multinode-062310-m02 --format={{.State.Status}}
	I0605 18:58:11.815534  217811 status.go:371] multinode-062310-m02 host status = "Running" (err=<nil>)
	I0605 18:58:11.815561  217811 host.go:66] Checking if "multinode-062310-m02" exists ...
	I0605 18:58:11.815796  217811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-062310-m02
	I0605 18:58:11.832378  217811 host.go:66] Checking if "multinode-062310-m02" exists ...
	I0605 18:58:11.832609  217811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:58:11.832649  217811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-062310-m02
	I0605 18:58:11.849084  217811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/20889-6302/.minikube/machines/multinode-062310-m02/id_rsa Username:docker}
	I0605 18:58:11.936085  217811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:58:11.946301  217811 status.go:176] multinode-062310-m02 status: &{Name:multinode-062310-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:58:11.946336  217811 status.go:174] checking status of multinode-062310-m03 ...
	I0605 18:58:11.946610  217811 cli_runner.go:164] Run: docker container inspect multinode-062310-m03 --format={{.State.Status}}
	I0605 18:58:11.964094  217811 status.go:371] multinode-062310-m03 host status = "Stopped" (err=<nil>)
	I0605 18:58:11.964121  217811 status.go:384] host is not running, skipping remaining checks
	I0605 18:58:11.964129  217811 status.go:176] multinode-062310-m03 status: &{Name:multinode-062310-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
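
The stderr above shows how per-node status is derived: inspect the container state, probe kubelet via systemctl over SSH, and for the control plane hit the apiserver's /healthz. Once any host is stopped, status exits non-zero (7 in this run). Condensed, with the hypothetical profile name from the earlier sketches:

    minikube -p mn-demo node stop m03
    minikube -p mn-demo status; echo "exit=$?"   # non-zero (7 here) while a node is stopped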

TestMultiNode/serial/StartAfterStop (7.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-062310 node start m03 -v=5 --alsologtostderr: (7.258047953s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.90s)

TestMultiNode/serial/RestartKeepsNodes (73.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-062310
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-062310
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-062310: (22.351843469s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-062310 --wait=true -v=5 --alsologtostderr
E0605 18:58:55.397103   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-062310 --wait=true -v=5 --alsologtostderr: (50.940001479s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-062310
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.39s)

TestMultiNode/serial/DeleteNode (5.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-062310 node delete m03: (4.555125595s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)

TestMultiNode/serial/StopMultiNode (21.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-062310 stop: (21.292694566s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-062310 status: exit status 7 (81.034227ms)

-- stdout --
	multinode-062310
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-062310-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr: exit status 7 (87.720109ms)

-- stdout --
	multinode-062310
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-062310-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0605 18:59:59.790648  233653 out.go:345] Setting OutFile to fd 1 ...
	I0605 18:59:59.790744  233653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:59:59.790756  233653 out.go:358] Setting ErrFile to fd 2...
	I0605 18:59:59.790762  233653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0605 18:59:59.790980  233653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20889-6302/.minikube/bin
	I0605 18:59:59.791197  233653 out.go:352] Setting JSON to false
	I0605 18:59:59.791230  233653 mustload.go:65] Loading cluster: multinode-062310
	I0605 18:59:59.791398  233653 notify.go:220] Checking for updates...
	I0605 18:59:59.791723  233653 config.go:182] Loaded profile config "multinode-062310": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0605 18:59:59.791744  233653 status.go:174] checking status of multinode-062310 ...
	I0605 18:59:59.792185  233653 cli_runner.go:164] Run: docker container inspect multinode-062310 --format={{.State.Status}}
	I0605 18:59:59.811396  233653 status.go:371] multinode-062310 host status = "Stopped" (err=<nil>)
	I0605 18:59:59.811439  233653 status.go:384] host is not running, skipping remaining checks
	I0605 18:59:59.811447  233653 status.go:176] multinode-062310 status: &{Name:multinode-062310 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:59:59.811510  233653 status.go:174] checking status of multinode-062310-m02 ...
	I0605 18:59:59.811856  233653 cli_runner.go:164] Run: docker container inspect multinode-062310-m02 --format={{.State.Status}}
	I0605 18:59:59.829290  233653 status.go:371] multinode-062310-m02 host status = "Stopped" (err=<nil>)
	I0605 18:59:59.829319  233653 status.go:384] host is not running, skipping remaining checks
	I0605 18:59:59.829327  233653 status.go:176] multinode-062310-m02 status: &{Name:multinode-062310-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.46s)

TestMultiNode/serial/RestartMultiNode (54.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-062310 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-062310 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (53.489813533s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-062310 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.05s)

TestMultiNode/serial/ValidateNameConflict (24.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-062310
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-062310-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-062310-m02 --driver=docker  --container-runtime=docker: exit status 14 (68.259649ms)

-- stdout --
	* [multinode-062310-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20889
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-062310-m02' is duplicated with machine name 'multinode-062310-m02' in profile 'multinode-062310'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-062310-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-062310-m03 --driver=docker  --container-runtime=docker: (22.436117936s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-062310
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-062310: exit status 80 (266.864922ms)

-- stdout --
	* Adding node m03 to cluster multinode-062310 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-062310-m03 already exists in multinode-062310-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-062310-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-062310-m03: (2.057833585s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.87s)

TestPreload (98.37s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-265565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0605 19:01:56.761364   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-265565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (53.290031224s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-265565 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-265565 image pull gcr.io/k8s-minikube/busybox: (1.477672176s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-265565
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-265565: (10.673555711s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-265565 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-265565 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (30.581816755s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-265565 image list
helpers_test.go:175: Cleaning up "test-preload-265565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-265565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-265565: (2.145730909s)
--- PASS: TestPreload (98.37s)
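
TestPreload checks that an image cached on an older, non-preloaded cluster survives a stop/start cycle. Condensed (hypothetical profile name; the Kubernetes version and image are the ones from the run above):

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=docker
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo
    minikube -p preload-demo image list   # busybox should still be present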

TestScheduledStopUnix (98.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-443376 --memory=3072 --driver=docker  --container-runtime=docker
E0605 19:03:19.828396   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-443376 --memory=3072 --driver=docker  --container-runtime=docker: (26.091880014s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443376 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-443376 -n scheduled-stop-443376
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443376 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0605 19:03:27.396788   13279 retry.go:31] will retry after 133.191µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.397975   13279 retry.go:31] will retry after 222.743µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.399175   13279 retry.go:31] will retry after 281.848µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.400319   13279 retry.go:31] will retry after 472.174µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.401464   13279 retry.go:31] will retry after 390.871µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.402589   13279 retry.go:31] will retry after 641.061µs: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.403743   13279 retry.go:31] will retry after 1.416301ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.405950   13279 retry.go:31] will retry after 1.905476ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.408163   13279 retry.go:31] will retry after 3.199077ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.412363   13279 retry.go:31] will retry after 3.661998ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.416611   13279 retry.go:31] will retry after 4.322554ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.421819   13279 retry.go:31] will retry after 6.54091ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.429095   13279 retry.go:31] will retry after 9.625751ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.439329   13279 retry.go:31] will retry after 29.044631ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
I0605 19:03:27.468518   13279 retry.go:31] will retry after 19.097657ms: open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/scheduled-stop-443376/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443376 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443376 -n scheduled-stop-443376
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-443376
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443376 --schedule 15s
E0605 19:03:55.397394   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-443376
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-443376: exit status 7 (62.865567ms)

-- stdout --
	scheduled-stop-443376
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443376 -n scheduled-stop-443376
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443376 -n scheduled-stop-443376: exit status 7 (65.887914ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-443376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-443376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-443376: (1.607963191s)
--- PASS: TestScheduledStopUnix (98.99s)
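
The scheduled-stop flow above arms a delayed stop, cancels it, then re-arms a short one and confirms the profile went down. Condensed, with a hypothetical profile name (flags as in the log):

    minikube stop -p sched-demo --schedule 5m                  # arm a stop 5 minutes out
    minikube status -p sched-demo --format='{{.TimeToStop}}'   # time remaining
    minikube stop -p sched-demo --cancel-scheduled             # disarm it
    minikube stop -p sched-demo --schedule 15s                 # re-arm; fires in 15s
    minikube status -p sched-demo                              # exits 7 once stopped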

TestSkaffold (99.5s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2820026097 version
skaffold_test.go:63: skaffold version: v2.16.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-003512 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-003512 --memory=3072 --driver=docker  --container-runtime=docker: (24.295423478s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2820026097 run --minikube-profile skaffold-003512 --kube-context skaffold-003512 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2820026097 run --minikube-profile skaffold-003512 --kube-context skaffold-003512 --status-check=true --port-forward=false --interactive=false: (1m0.429898025s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67676b5885-ljdtw" [cd6d278f-c81e-4097-ad7c-d04727bb689c] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00369989s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-cfcf6bc85-cnbl4" [53c4398a-7e5c-450f-b05c-2efe6cb9ae95] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003505646s
helpers_test.go:175: Cleaning up "skaffold-003512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-003512
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-003512: (2.82655246s)
--- PASS: TestSkaffold (99.50s)
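
TestSkaffold builds and deploys the leeroy-web/leeroy-app sample into the cluster and waits for both deployments to become healthy. The core invocation, runnable from a directory containing a skaffold.yaml with the skaffold binary installed (profile name hypothetical; flags mirror the run above):

    minikube start -p skaffold-demo --memory=3072 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false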

TestInsufficientStorage (9.93s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-355180 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-355180 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.808861109s)

-- stdout --
	{"specversion":"1.0","id":"2867b2a2-fa66-4031-b67e-ea1f6088d07e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-355180] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8415080b-0fff-4949-9c39-d8fa69f14dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20889"}}
	{"specversion":"1.0","id":"398b96b4-e4ea-4b31-8250-de4330ed72cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"44d33b6f-4cad-47ac-8b14-d842e4f79048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig"}}
	{"specversion":"1.0","id":"acddef59-8260-4c98-8871-174023120c46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube"}}
	{"specversion":"1.0","id":"e875d8dc-aa0c-48d4-96f8-7efed98cf56c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db8e282c-d320-4c8b-ba1c-8802e65a0771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"15c74da9-7e3e-48f6-a0a5-55f9657d7992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"262444d8-c5ac-4e00-a9f5-36a5c66f60f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e41f4106-403e-4f8e-bc3b-8c03886fc321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d5bb685-3967-410a-8312-b77a73927249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d2806b7c-0e53-45e9-9165-867fcbfd6a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-355180\" primary control-plane node in \"insufficient-storage-355180\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"643830fe-6a35-4bc0-b621-91c8cbae963d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbe05b4f-d1cc-477b-adde-61d6d6fa7f6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"27021fc2-f96e-4c70-961a-8dc52b899ae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
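A note on the output format above: each stdout line is a CloudEvents-style JSON envelope emitted by minikube's --output=json mode, with the human-readable payload nested under the "data" key. A minimal Go sketch for pulling the step fields back out of a captured line (the event struct is an illustrative subset inferred from this log, not minikube's exported schema):

package main

import (
	"encoding/json"
	"fmt"
)

// event models only the envelope fields visible in the log above;
// it is an assumption-based subset, not minikube's own type.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: [1/19] Using the docker driver based on user configuration
	fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
}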
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-355180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-355180 --output=json --layout=cluster: exit status 7 (257.28513ms)

-- stdout --
	{"Name":"insufficient-storage-355180","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-355180","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0605 19:06:27.448626  274983 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-355180" does not appear in /home/jenkins/minikube-integration/20889-6302/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-355180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-355180 --output=json --layout=cluster: exit status 7 (256.82822ms)

-- stdout --
	{"Name":"insufficient-storage-355180","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-355180","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0605 19:06:27.706543  275081 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-355180" does not appear in /home/jenkins/minikube-integration/20889-6302/kubeconfig
	E0605 19:06:27.716185  275081 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/insufficient-storage-355180/events.json: no such file or directory

** /stderr **
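Both status runs above exit with status 7 yet still print a complete JSON document, so callers get two signals: the process exit code and the StatusCode fields in the body (507 here, echoing HTTP's Insufficient Storage). A hedged Go sketch of how a wrapper might treat the non-zero exit as data rather than as a hard failure (binary path and flags copied from the log; error handling simplified):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "-p", "insufficient-storage-355180",
		"--output=json", "--layout=cluster")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("healthy:", string(out))
	case errors.As(err, &exitErr):
		// A non-zero exit (7 above) still carries a JSON status body on
		// stdout describing the degraded state; parse it instead of bailing.
		fmt.Printf("status exit %d, body: %s\n", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}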
helpers_test.go:175: Cleaning up "insufficient-storage-355180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-355180
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-355180: (1.609760685s)
--- PASS: TestInsufficientStorage (9.93s)

TestRunningBinaryUpgrade (61.9s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2393656992 start -p running-upgrade-890065 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2393656992 start -p running-upgrade-890065 --memory=3072 --vm-driver=docker  --container-runtime=docker: (30.267864735s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-890065 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-890065 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.877486578s)
helpers_test.go:175: Cleaning up "running-upgrade-890065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-890065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-890065: (2.139388057s)
--- PASS: TestRunningBinaryUpgrade (61.90s)

TestKubernetesUpgrade (338.61s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.680561936s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-191768
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-191768: (1.208410013s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-191768 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-191768 status --format={{.Host}}: exit status 7 (80.210274ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.33.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.33.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m27.158375352s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-191768 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
E0605 19:11:46.521641   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (64.947948ms)

-- stdout --
	* [kubernetes-upgrade-191768] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20889
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-191768
	    minikube start -p kubernetes-upgrade-191768 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1917682 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.1, by running:
	    
	    minikube start -p kubernetes-upgrade-191768 --kubernetes-version=v1.33.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.33.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0605 19:11:56.761467   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:58.469490   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-191768 --memory=3072 --kubernetes-version=v1.33.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.039008767s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-191768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-191768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-191768: (2.320841584s)
--- PASS: TestKubernetesUpgrade (338.61s)

TestMissingContainerUpgrade (132.77s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.791916546 start -p missing-upgrade-399321 --memory=3072 --driver=docker  --container-runtime=docker
E0605 19:06:56.761564   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.791916546 start -p missing-upgrade-399321 --memory=3072 --driver=docker  --container-runtime=docker: (1m9.944349998s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-399321
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-399321: (10.455458621s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-399321
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-399321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-399321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.76048912s)
helpers_test.go:175: Cleaning up "missing-upgrade-399321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-399321
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-399321: (2.130321294s)
--- PASS: TestMissingContainerUpgrade (132.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (77.024133ms)

-- stdout --
	* [NoKubernetes-297783] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20889
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20889-6302/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20889-6302/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (36.32s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-297783 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-297783 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.002658838s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-297783 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.32s)

TestNoKubernetes/serial/StartWithStopK8s (16.19s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (14.139331714s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-297783 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-297783 status -o json: exit status 2 (321.120232ms)

-- stdout --
	{"Name":"NoKubernetes-297783","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-297783
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-297783: (1.725466119s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.19s)

TestNoKubernetes/serial/Start (9.21s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-297783 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (9.208428333s)
--- PASS: TestNoKubernetes/serial/Start (9.21s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-297783 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-297783 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.008203ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
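(For context: `systemctl is-active` exits non-zero when the unit is not active, with status 3 conventionally meaning "inactive", so the exit status 3 surfacing through ssh here is exactly the signal the test wants: kubelet is not running.)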
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (20.77s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.82318743s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.946131116s)
--- PASS: TestNoKubernetes/serial/ProfileList (20.77s)

TestNoKubernetes/serial/Stop (2.98s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-297783
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-297783: (2.983442379s)
--- PASS: TestNoKubernetes/serial/Stop (2.98s)

TestNoKubernetes/serial/StartNoArgs (7.03s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-297783 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-297783 --driver=docker  --container-runtime=docker: (7.03286148s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-297783 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-297783 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.201355ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStoppedBinaryUpgrade/Setup (0.41s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestStoppedBinaryUpgrade/Upgrade (72.29s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1163243628 start -p stopped-upgrade-554991 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1163243628 start -p stopped-upgrade-554991 --memory=3072 --vm-driver=docker  --container-runtime=docker: (33.136220833s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1163243628 -p stopped-upgrade-554991 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1163243628 -p stopped-upgrade-554991 stop: (10.894112229s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-554991 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-554991 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.259085935s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.29s)

TestPause/serial/Start (73.32s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-609917 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-609917 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m13.321069286s)
--- PASS: TestPause/serial/Start (73.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-554991
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-554991: (1.318611198s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestPause/serial/SecondStartNoReconfiguration (34.31s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-609917 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-609917 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.28940472s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.31s)

TestPause/serial/Pause (0.61s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-609917 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.35s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-609917 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-609917 --output=json --layout=cluster: exit status 2 (354.67393ms)

-- stdout --
	{"Name":"pause-609917","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-609917","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
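The StatusCode values in this report appear to reuse HTTP-style codes, which is why a paused cluster surfaces above as a distinct state (418) rather than a generic error. A small sketch of the mapping, compiled only from the codes observed in this log rather than from minikube's source:

package main

import "fmt"

// statusNames collects the StatusCode/StatusName pairs seen in this report;
// it is illustrative, not minikube's authoritative table.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	for _, code := range []int{418, 405, 200} {
		fmt.Println(code, "=>", statusNames[code])
	}
}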

TestPause/serial/Unpause (0.46s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-609917 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.46s)

TestPause/serial/PauseAgain (0.61s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-609917 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.61s)

TestPause/serial/DeletePaused (2.12s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-609917 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-609917 --alsologtostderr -v=5: (2.118184005s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

TestPause/serial/VerifyDeletedResources (15.42s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.357124974s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-609917
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-609917: exit status 1 (18.566976ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-609917: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.42s)

TestStartStop/group/old-k8s-version/serial/FirstStart (102.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-753965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-753965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m42.773988996s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (102.77s)

TestStartStop/group/no-preload/serial/FirstStart (79.7s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-332733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
E0605 19:11:05.544811   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.551259   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.562673   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.584074   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.625503   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.706989   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:05.868803   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:06.190327   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:06.832317   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:08.113976   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:10.676000   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:15.797901   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:11:26.040098   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-332733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (1m19.70458964s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.70s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-515356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-515356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (1m10.302789852s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.30s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-332733 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19e23b51-a115-4082-9821-08ad628d6d8c] Pending
helpers_test.go:344: "busybox" [19e23b51-a115-4082-9821-08ad628d6d8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [19e23b51-a115-4082-9821-08ad628d6d8c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003483812s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-332733 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-332733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-332733 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/no-preload/serial/Stop (10.72s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-332733 --alsologtostderr -v=3
E0605 19:12:27.483344   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-332733 --alsologtostderr -v=3: (10.717556502s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.72s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-753965 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c3ec8a4-772a-432a-a0ea-0a3c1728261e] Pending
helpers_test.go:344: "busybox" [9c3ec8a4-772a-432a-a0ea-0a3c1728261e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c3ec8a4-772a-432a-a0ea-0a3c1728261e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003370939s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-753965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-332733 -n no-preload-332733
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-332733 -n no-preload-332733: exit status 7 (102.644744ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-332733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (51.41s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-332733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-332733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (51.120140077s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-332733 -n no-preload-332733
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-753965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-753965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (11.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-753965 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-753965 --alsologtostderr -v=3: (11.029233033s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-753965 -n old-k8s-version-753965
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-753965 -n old-k8s-version-753965: exit status 7 (89.414664ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-753965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (101.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-753965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-753965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m41.174685322s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-753965 -n old-k8s-version-753965
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (101.48s)

TestStartStop/group/newest-cni/serial/FirstStart (33.39s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-125213 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-125213 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (33.392271342s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.39s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-515356 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c8deb210-1f1c-4bac-8bca-40c632fe60db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c8deb210-1f1c-4bac-8bca-40c632fe60db] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004252143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-515356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5kw5v" [0922fa36-eb89-476d-b850-ddbc52401741] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002971416s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-515356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-515356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-515356 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-515356 --alsologtostderr -v=3: (11.018344215s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5kw5v" [0922fa36-eb89-476d-b850-ddbc52401741] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003454252s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-332733 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-332733 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.57s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-332733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-332733 -n no-preload-332733
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-332733 -n no-preload-332733: exit status 2 (324.301032ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-332733 -n no-preload-332733
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-332733 -n no-preload-332733: exit status 2 (336.106707ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-332733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-332733 -n no-preload-332733
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-332733 -n no-preload-332733
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356: exit status 7 (140.977134ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-515356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)
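The sequence above also documents a useful property: addon changes are accepted while the profile is stopped. `status --format={{.Host}}` exits with status 7 for a stopped host, and the subsequent `addons enable` still succeeds; the test only verifies that the command completes, not that the addon is running. A minimal sketch:

# exit status 7 here just means the host is stopped
out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-515356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4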

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-515356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-515356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (54.443706484s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.76s)

TestStartStop/group/embed-certs/serial/FirstStart (66.76s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-455519 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
E0605 19:13:49.404988   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-455519 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (1m6.759329063s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.76s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-125213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-125213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.137243724s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/newest-cni/serial/Stop (11.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-125213 --alsologtostderr -v=3
E0605 19:13:55.397440   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/addons-191833/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-125213 --alsologtostderr -v=3: (11.095069241s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125213 -n newest-cni-125213
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125213 -n newest-cni-125213: exit status 7 (93.869423ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-125213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (15.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-125213 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-125213 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (15.054640358s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-125213 -n newest-cni-125213
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.37s)
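Because CNI pods cannot schedule without extra setup (see the warnings in the following subtests), this restart narrows what `--wait` blocks on. The same start command, wrapped for readability, with the relevant flags called out:

# wait only for the apiserver, system pods and the default service account;
# anything needing a working CNI would otherwise never become Ready
out/minikube-linux-amd64 start -p newest-cni-125213 --memory=3072 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=docker --kubernetes-version=v1.33.1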

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-125213 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-125213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125213 -n newest-cni-125213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125213 -n newest-cni-125213: exit status 2 (293.262656ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125213 -n newest-cni-125213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125213 -n newest-cni-125213: exit status 2 (286.007514ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-125213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-125213 -n newest-cni-125213
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-125213 -n newest-cni-125213
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)

TestNetworkPlugins/group/auto/Start (62.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m2.937681867s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.94s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qj27w" [38ca295f-fbef-4159-8999-6c8a0f278e6a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003612296s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mx86q" [75c9202d-6afc-442b-87d7-76135ca325bd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00300636s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qj27w" [38ca295f-fbef-4159-8999-6c8a0f278e6a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003063221s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-753965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mx86q" [75c9202d-6afc-442b-87d7-76135ca325bd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004151492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-515356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-753965 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-753965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-753965 -n old-k8s-version-753965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-753965 -n old-k8s-version-753965: exit status 2 (309.714157ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-753965 -n old-k8s-version-753965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-753965 -n old-k8s-version-753965: exit status 2 (311.97229ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-753965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-753965 -n old-k8s-version-753965
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-753965 -n old-k8s-version-753965
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-515356 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-515356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356: exit status 2 (353.417036ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356: exit status 2 (375.81156ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-515356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-515356 -n default-k8s-diff-port-515356
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-455519 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec42cd95-5579-4af6-964c-597acc90a356] Pending
helpers_test.go:344: "busybox" [ec42cd95-5579-4af6-964c-597acc90a356] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec42cd95-5579-4af6-964c-597acc90a356] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003360806s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-455519 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)
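The deploy check can be replayed against the same context; the closing `ulimit -n` exec confirms the busybox container sees a usable open-file limit. A sketch, where the `kubectl wait` step is a hypothetical stand-in for the test's own readiness polling:

kubectl --context embed-certs-455519 create -f testdata/busybox.yaml
# hypothetical equivalent of the test's 8m pod-readiness wait
kubectl --context embed-certs-455519 wait --for=condition=Ready pod/busybox --timeout=8m
kubectl --context embed-certs-455519 exec busybox -- /bin/sh -c "ulimit -n"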

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (58.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.985797179s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.99s)

TestNetworkPlugins/group/calico/Start (58.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (58.278642613s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-455519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-455519 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.09837753s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-455519 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/embed-certs/serial/Stop (10.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-455519 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-455519 --alsologtostderr -v=3: (10.702351575s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.70s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-455519 -n embed-certs-455519
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-455519 -n embed-certs-455519: exit status 7 (83.109041ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-455519 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-455519 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-455519 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.33.1: (50.833115673s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-455519 -n embed-certs-455519
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.23s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-102022 "pgrep -a kubelet"
I0605 19:15:27.559518   13279 config.go:182] Loaded profile config "auto-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gr8pb" [72ff1f02-661b-4b8c-8acf-9f450529a085] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gr8pb" [72ff1f02-661b-4b8c-8acf-9f450529a085] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004372352s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
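The DNS/Localhost/HairPin trio above repeats for every plugin under test, and all three probes run inside the netcat deployment created by NetCatPod. Consolidated for the auto profile (`-z` probes without sending data, `-w 5` caps the connect timeout at 5s, `-i 5` spaces successive probes):

# DNS: in-cluster service discovery
kubectl --context auto-102022 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: loopback reachability from inside the pod
kubectl --context auto-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod dials its own service name, exercising hairpin NAT
kubectl --context auto-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"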

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fllj2" [b67442e4-4ad3-4282-a70b-cec3ff1a947b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003022205s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-x9vsv" [db0aba8a-5504-4d83-887a-59b720d08653] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003903942s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-102022 "pgrep -a kubelet"
I0605 19:15:57.332346   13279 config.go:182] Loaded profile config "kindnet-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7v5bq" [abad92e6-dc56-497e-94f9-e94bad68df6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7v5bq" [abad92e6-dc56-497e-94f9-e94bad68df6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004523836s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.20s)

TestNetworkPlugins/group/custom-flannel/Start (51.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.376290875s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.38s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-102022 "pgrep -a kubelet"
I0605 19:15:57.996575   13279 config.go:182] Loaded profile config "calico-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2dx8l" [73ca21e1-3f29-4a28-99e4-cbd66eb40337] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2dx8l" [73ca21e1-3f29-4a28-99e4-cbd66eb40337] Running
E0605 19:16:05.545075   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00268318s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-trqjz" [d6ec9c4e-04e0-4b32-b628-f8d5363144ea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003710972s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-trqjz" [d6ec9c4e-04e0-4b32-b628-f8d5363144ea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002671248s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-455519 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-455519 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
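The image check lists everything loaded in the profile; images outside the expected minikube set are reported (here the busybox left over from DeployApp) without failing the test. The underlying command:

# list the profile's images as JSON
out/minikube-linux-amd64 -p embed-certs-455519 image list --format=json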

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-455519 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-455519 -n embed-certs-455519
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-455519 -n embed-certs-455519: exit status 2 (335.671233ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-455519 -n embed-certs-455519
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-455519 -n embed-certs-455519: exit status 2 (322.789542ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-455519 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-455519 -n embed-certs-455519
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-455519 -n embed-certs-455519
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

TestNetworkPlugins/group/false/Start (71.99s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m11.990764468s)
--- PASS: TestNetworkPlugins/group/false/Start (71.99s)

TestNetworkPlugins/group/enable-default-cni/Start (70.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0605 19:16:33.247004   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/skaffold-003512/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m10.556654044s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.56s)

TestNetworkPlugins/group/flannel/Start (47.46s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.463868436s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-102022 "pgrep -a kubelet"
I0605 19:16:49.337520   13279 config.go:182] Loaded profile config "custom-flannel-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h7b5t" [8e612c0f-477e-414b-a4ab-74859d691a7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h7b5t" [8e612c0f-477e-414b-a4ab-74859d691a7a] Running
E0605 19:16:56.761854   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/functional-390168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003601909s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (67.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.736986338s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.74s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l9flk" [ab94dfea-2507-4112-8a0d-44d76f21d1f3] Running
E0605 19:17:22.141019   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/no-preload-332733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003837547s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-102022 "pgrep -a kubelet"
I0605 19:17:27.806127   13279 config.go:182] Loaded profile config "flannel-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k9d9v" [e8393c94-adca-41b5-975f-c195ca1872e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k9d9v" [e8393c94-adca-41b5-975f-c195ca1872e7] Running
E0605 19:17:32.383316   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/no-preload-332733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.528838   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.535275   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.546686   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.568023   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.609447   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.690898   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:32.852287   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:33.174001   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:33.815997   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0605 19:17:35.097660   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003834282s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-102022 "pgrep -a kubelet"
I0605 19:17:31.638169   13279 config.go:182] Loaded profile config "false-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)
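KubeletFlags only needs pgrep -a to print the running kubelet's full command line over SSH; the test then inspects that line, presumably for the network-plugin configuration the cluster was started with. By hand:

# -a makes pgrep print the whole command line, not just the PID,
# so the kubelet's flags can be read straight from the output
out/minikube-linux-amd64 ssh -p false-102022 "pgrep -a kubelet"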

TestNetworkPlugins/group/false/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2rs49" [da483057-b083-4fee-a989-07451e60af3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2rs49" [da483057-b083-4fee-a989-07451e60af3c] Running
E0605 19:17:37.659323   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/old-k8s-version-753965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003560826s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.20s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)
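The three short checks above probe progressively wider paths from inside the netcat pod: DNS resolves the cluster's API service name, Localhost dials the pod's own port, and HairPin dials the pod's own Service name from the pod backing it, which only succeeds when the network plugin handles hairpin traffic. Condensed, the trio is:

# DNS: in-cluster name resolution via CoreDNS
kubectl --context flannel-102022 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: -z probes the port without sending data; -w 5 caps the wait at 5s
kubectl --context flannel-102022 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080
# HairPin: the pod reaches itself through the "netcat" Service name
kubectl --context flannel-102022 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080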

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-102022 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-102022 exec deployment/netcat -- nslookup kubernetes.default
I0605 19:17:41.891195   13279 config.go:182] Loaded profile config "enable-default-cni-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-76c5n" [bb2cf6cb-56c2-4013-abab-ec2307add43e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-76c5n" [bb2cf6cb-56c2-4013-abab-ec2307add43e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005798803s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/Start (67.43s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-102022 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m7.42701661s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.43s)
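Every Start in this group shares the same invocation shape and differs only in how the plugin is selected; kubenet uses the legacy --network-plugin flag where the CNI-based variants use --cni. The command above, wrapped for readability:

out/minikube-linux-amd64 start -p kubenet-102022 \
  --driver=docker --container-runtime=docker \
  --memory=3072 --network-plugin=kubenet \
  --wait=true --wait-timeout=15m --alsologtostderr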

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-102022 "pgrep -a kubelet"
I0605 19:18:28.159913   13279 config.go:182] Loaded profile config "bridge-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7vf4c" [9c6e38d3-b256-427e-8b53-55f687293d88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0605 19:18:28.659605   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/default-k8s-diff-port-515356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-7vf4c" [9c6e38d3-b256-427e-8b53-55f687293d88] Running
E0605 19:18:33.826990   13279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/no-preload-332733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003189402s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-102022 "pgrep -a kubelet"
I0605 19:19:06.682633   13279 config.go:182] Loaded profile config "kubenet-102022": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.33.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-102022 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bhm2q" [7c6f7e0c-a0d4-416d-a76f-fbc95f91d0c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bhm2q" [7c6f7e0c-a0d4-416d-a76f-fbc95f91d0c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003396163s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-102022 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-102022 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

Test skip (22/347)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.33.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.33.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.1/cached-images (0.00s)

TestDownloadOnly/v1.33.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.33.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.1/binaries (0.00s)

TestDownloadOnly/v1.33.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.33.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-692243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-692243
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/cilium (3.55s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-102022 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-102022

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-102022

>>> host: /etc/nsswitch.conf:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/hosts:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/resolv.conf:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-102022

>>> host: crictl pods:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: crictl containers:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> k8s: describe netcat deployment:
error: context "cilium-102022" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-102022" does not exist

>>> k8s: netcat logs:
error: context "cilium-102022" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-102022" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-102022" does not exist

>>> k8s: coredns logs:
error: context "cilium-102022" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-102022" does not exist

>>> k8s: api server logs:
error: context "cilium-102022" does not exist

>>> host: /etc/cni:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: ip a s:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: ip r s:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: iptables-save:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: iptables table nat:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-102022

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-102022

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-102022" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-102022" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-102022

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-102022

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-102022" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-102022" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-102022" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-102022" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-102022" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: kubelet daemon config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> k8s: kubelet logs:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Jun 2025 19:09:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-197555
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Jun 2025 19:07:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-191768
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20889-6302/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Jun 2025 19:09:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-609917
contexts:
- context:
    cluster: force-systemd-flag-197555
    extensions:
    - extension:
        last-update: Thu, 05 Jun 2025 19:09:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: force-systemd-flag-197555
  name: force-systemd-flag-197555
- context:
    cluster: kubernetes-upgrade-191768
    user: kubernetes-upgrade-191768
  name: kubernetes-upgrade-191768
- context:
    cluster: pause-609917
    extensions:
    - extension:
        last-update: Thu, 05 Jun 2025 19:09:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-609917
  name: pause-609917
current-context: force-systemd-flag-197555
kind: Config
preferences: {}
users:
- name: force-systemd-flag-197555
  user:
    client-certificate: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/force-systemd-flag-197555/client.crt
    client-key: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/force-systemd-flag-197555/client.key
- name: kubernetes-upgrade-191768
  user:
    client-certificate: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/kubernetes-upgrade-191768/client.crt
    client-key: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/kubernetes-upgrade-191768/client.key
- name: pause-609917
  user:
    client-certificate: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/pause-609917/client.crt
    client-key: /home/jenkins/minikube-integration/20889-6302/.minikube/profiles/pause-609917/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-102022

>>> host: docker daemon status:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: docker daemon config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: docker system info:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: cri-docker daemon status:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: cri-docker daemon config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: cri-dockerd version:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: containerd daemon status:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: containerd daemon config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: containerd config dump:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: crio daemon status:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: crio daemon config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: /etc/crio:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

>>> host: crio config:
* Profile "cilium-102022" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102022"

----------------------- debugLogs end: cilium-102022 [took: 3.398779641s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-102022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-102022
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)
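The skip above never creates the cilium-102022 profile, which is why every debug probe reports context was not found / profile not found. To exercise the cilium variant by hand, the start would look something like this sketch (flags mirrored from the kubenet Start earlier in this report; --cni=cilium is minikube's selector for the plugin):

out/minikube-linux-amd64 start -p cilium-102022 \
  --driver=docker --container-runtime=docker \
  --memory=3072 --cni=cilium \
  --wait=true --wait-timeout=15m --alsologtostderr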