Test Report: Docker_Linux_docker_arm64 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Test fail (1/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 74.48        |
TestAddons/parallel/Registry (74.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.062162ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-84svt" [d2e45ba0-4b0a-4648-a233-1dfc5982c286] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003876396s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s7k45" [4fbca207-de93-4adb-baa8-2219f829573b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006434631s
addons_test.go:338: (dbg) Run:  kubectl --context addons-711398 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-711398 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-711398 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.132721189s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-711398 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 ip
2024/09/20 19:35:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-711398
helpers_test.go:235: (dbg) docker inspect addons-711398:

-- stdout --
	[
	    {
	        "Id": "5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1",
	        "Created": "2024-09-20T19:21:51.507733754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 723627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:21:51.662805735Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1/hosts",
	        "LogPath": "/var/lib/docker/containers/5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1/5e42a3b36384066d8ca5004aa14f662e66bf1ada0e09552963e5d4c92df8a2e1-json.log",
	        "Name": "/addons-711398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-711398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-711398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85859bf9967404372e4a23dd6e47fd89fd64b995bb138837c37719a33c6400cc-init/diff:/var/lib/docker/overlay2/49b3229d349a779acfb3b52fb14a5968187f2ddeb7c959acb87eba75b03cb72a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85859bf9967404372e4a23dd6e47fd89fd64b995bb138837c37719a33c6400cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85859bf9967404372e4a23dd6e47fd89fd64b995bb138837c37719a33c6400cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85859bf9967404372e4a23dd6e47fd89fd64b995bb138837c37719a33c6400cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-711398",
	                "Source": "/var/lib/docker/volumes/addons-711398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-711398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-711398",
	                "name.minikube.sigs.k8s.io": "addons-711398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a47e25f1d6ae56c1b56db44ea99fa5ec6dc229d77e2e46ca82cf35f48b64148",
	            "SandboxKey": "/var/run/docker/netns/7a47e25f1d6a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-711398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "714a7f9ad63b2b663211e1dda48960e7c9687032217f0d1bb937afc5ee3d88fa",
	                    "EndpointID": "a04c1d69bc10fdf487757b78b1693c3e171c4f31eb181fa9d42b199c09503f9f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-711398",
	                        "5e42a3b36384"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
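One detail worth pulling out of the inspect output above: `HostConfig.PortBindings` requests `5000/tcp` on `127.0.0.1` with an empty `HostPort` (i.e. "pick any free port"), and `NetworkSettings.Ports` records the port Docker actually assigned, `32770`. A minimal sketch of extracting that assigned binding from inspect JSON (the snippet hard-codes a trimmed fragment of the structure shown above):

```python
import json

# Trimmed fragment of the `docker inspect addons-711398` output above.
inspect_json = """
[{"NetworkSettings": {"Ports": {
    "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
}}}]
"""

# `docker inspect` returns a JSON array; take the first container.
container = json.loads(inspect_json)[0]
binding = container["NetworkSettings"]["Ports"]["5000/tcp"][0]
print("%s:%s" % (binding["HostIp"], binding["HostPort"]))  # 127.0.0.1:32770
```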
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-711398 -n addons-711398
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 logs -n 25: (1.216720192s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-164565   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | -p download-only-164565                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| delete  | -p download-only-164565                                                                     | download-only-164565   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-090878   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | -p download-only-090878                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| delete  | -p download-only-090878                                                                     | download-only-090878   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| delete  | -p download-only-164565                                                                     | download-only-164565   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| delete  | -p download-only-090878                                                                     | download-only-090878   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-499597 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | download-docker-499597                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-499597                                                                   | download-docker-499597 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-896186   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | binary-mirror-896186                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43279                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-896186                                                                     | binary-mirror-896186   | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | addons-711398                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | addons-711398                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-711398 --wait=true                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-711398 addons disable                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-711398 addons disable                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:33 UTC | 20 Sep 24 19:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-711398 addons                                                                        | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-711398 addons                                                                        | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	|         | -p addons-711398                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-711398 ssh cat                                                                       | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	|         | /opt/local-path-provisioner/pvc-9a9bf7c2-70be-4ebd-8920-1988957db53e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-711398 addons disable                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-711398 ip                                                                            | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:35 UTC | 20 Sep 24 19:35 UTC |
	| addons  | addons-711398 addons disable                                                                | addons-711398          | jenkins | v1.34.0 | 20 Sep 24 19:35 UTC | 20 Sep 24 19:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:21:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:21:27.391334  723137 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:21:27.391502  723137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:27.391531  723137 out.go:358] Setting ErrFile to fd 2...
	I0920 19:21:27.391552  723137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:27.391826  723137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:21:27.392386  723137 out.go:352] Setting JSON to false
	I0920 19:21:27.393284  723137 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11039,"bootTime":1726849049,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 19:21:27.393384  723137 start.go:139] virtualization:  
	I0920 19:21:27.395096  723137 out.go:177] * [addons-711398] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:21:27.396788  723137 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:21:27.396968  723137 notify.go:220] Checking for updates...
	I0920 19:21:27.399623  723137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:21:27.400883  723137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:21:27.402474  723137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	I0920 19:21:27.403632  723137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:21:27.404923  723137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:21:27.406440  723137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:21:27.427139  723137 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:21:27.427282  723137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:27.486388  723137 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:21:27.476652284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:27.486500  723137 docker.go:318] overlay module found
	I0920 19:21:27.487840  723137 out.go:177] * Using the docker driver based on user configuration
	I0920 19:21:27.488896  723137 start.go:297] selected driver: docker
	I0920 19:21:27.488916  723137 start.go:901] validating driver "docker" against <nil>
	I0920 19:21:27.488929  723137 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:21:27.489608  723137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:27.542189  723137 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:21:27.533021108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:27.542399  723137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:21:27.542621  723137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:21:27.543871  723137 out.go:177] * Using Docker driver with root privileges
	I0920 19:21:27.545180  723137 cni.go:84] Creating CNI manager for ""
	I0920 19:21:27.545265  723137 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 19:21:27.545280  723137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 19:21:27.545368  723137 start.go:340] cluster config:
	{Name:addons-711398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-711398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:21:27.546704  723137 out.go:177] * Starting "addons-711398" primary control-plane node in "addons-711398" cluster
	I0920 19:21:27.547732  723137 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 19:21:27.548991  723137 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:21:27.550372  723137 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 19:21:27.550422  723137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 19:21:27.550435  723137 cache.go:56] Caching tarball of preloaded images
	I0920 19:21:27.550473  723137 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:21:27.550518  723137 preload.go:172] Found /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 19:21:27.550528  723137 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 19:21:27.550896  723137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/config.json ...
	I0920 19:21:27.550964  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/config.json: {Name:mk96f1d698a5d5182bc7f62f1616f96a768bada0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:27.565053  723137 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:21:27.565174  723137 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:21:27.565197  723137 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:21:27.565206  723137 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:21:27.565213  723137 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:21:27.565218  723137 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:21:44.777918  723137 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:21:44.777959  723137 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:21:44.777989  723137 start.go:360] acquireMachinesLock for addons-711398: {Name:mk21025134b424beb2eccb2fad371095a8edea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:21:44.778133  723137 start.go:364] duration metric: took 120.818µs to acquireMachinesLock for "addons-711398"
	I0920 19:21:44.778164  723137 start.go:93] Provisioning new machine with config: &{Name:addons-711398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-711398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 19:21:44.778252  723137 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:21:44.780053  723137 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 19:21:44.780323  723137 start.go:159] libmachine.API.Create for "addons-711398" (driver="docker")
	I0920 19:21:44.780365  723137 client.go:168] LocalClient.Create starting
	I0920 19:21:44.780552  723137 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem
	I0920 19:21:45.066383  723137 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/cert.pem
	I0920 19:21:45.725010  723137 cli_runner.go:164] Run: docker network inspect addons-711398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:21:45.741026  723137 cli_runner.go:211] docker network inspect addons-711398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:21:45.741125  723137 network_create.go:284] running [docker network inspect addons-711398] to gather additional debugging logs...
	I0920 19:21:45.741148  723137 cli_runner.go:164] Run: docker network inspect addons-711398
	W0920 19:21:45.758263  723137 cli_runner.go:211] docker network inspect addons-711398 returned with exit code 1
	I0920 19:21:45.758300  723137 network_create.go:287] error running [docker network inspect addons-711398]: docker network inspect addons-711398: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-711398 not found
	I0920 19:21:45.758315  723137 network_create.go:289] output of [docker network inspect addons-711398]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-711398 not found
	
	** /stderr **
	I0920 19:21:45.758438  723137 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:21:45.780740  723137 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001805d90}
	I0920 19:21:45.780792  723137 network_create.go:124] attempt to create docker network addons-711398 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 19:21:45.780852  723137 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-711398 addons-711398
	I0920 19:21:45.873983  723137 network_create.go:108] docker network addons-711398 192.168.49.0/24 created
	I0920 19:21:45.874016  723137 kic.go:121] calculated static IP "192.168.49.2" for the "addons-711398" container
	I0920 19:21:45.874092  723137 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:21:45.889537  723137 cli_runner.go:164] Run: docker volume create addons-711398 --label name.minikube.sigs.k8s.io=addons-711398 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:21:45.912992  723137 oci.go:103] Successfully created a docker volume addons-711398
	I0920 19:21:45.913102  723137 cli_runner.go:164] Run: docker run --rm --name addons-711398-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-711398 --entrypoint /usr/bin/test -v addons-711398:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:21:47.780168  723137 cli_runner.go:217] Completed: docker run --rm --name addons-711398-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-711398 --entrypoint /usr/bin/test -v addons-711398:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.866949845s)
	I0920 19:21:47.780202  723137 oci.go:107] Successfully prepared a docker volume addons-711398
	I0920 19:21:47.780230  723137 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 19:21:47.780252  723137 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:21:47.780323  723137 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-711398:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 19:21:51.443826  723137 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-711398:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.663461475s)
	I0920 19:21:51.443859  723137 kic.go:203] duration metric: took 3.663604553s to extract preloaded images to volume ...
	W0920 19:21:51.443999  723137 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:21:51.444169  723137 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:21:51.493090  723137 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-711398 --name addons-711398 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-711398 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-711398 --network addons-711398 --ip 192.168.49.2 --volume addons-711398:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:21:51.833700  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Running}}
	I0920 19:21:51.857814  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:21:51.880342  723137 cli_runner.go:164] Run: docker exec addons-711398 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:21:51.953735  723137 oci.go:144] the created container "addons-711398" has a running status.
	I0920 19:21:51.953768  723137 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa...
	I0920 19:21:52.934149  723137 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:21:52.963453  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:21:52.982108  723137 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:21:52.982129  723137 kic_runner.go:114] Args: [docker exec --privileged addons-711398 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:21:53.040012  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:21:53.056862  723137 machine.go:93] provisionDockerMachine start ...
	I0920 19:21:53.056963  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:53.073636  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:53.073924  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:53.073940  723137 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:21:53.219825  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-711398
	
	I0920 19:21:53.219856  723137 ubuntu.go:169] provisioning hostname "addons-711398"
	I0920 19:21:53.219950  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:53.240832  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:53.241105  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:53.241124  723137 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-711398 && echo "addons-711398" | sudo tee /etc/hostname
	I0920 19:21:53.397041  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-711398
	
	I0920 19:21:53.397126  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:53.414962  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:53.415209  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:53.415227  723137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-711398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-711398/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-711398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:21:53.560348  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:21:53.560373  723137 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-715609/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-715609/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-715609/.minikube}
	I0920 19:21:53.560404  723137 ubuntu.go:177] setting up certificates
	I0920 19:21:53.560414  723137 provision.go:84] configureAuth start
	I0920 19:21:53.560477  723137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-711398
	I0920 19:21:53.578940  723137 provision.go:143] copyHostCerts
	I0920 19:21:53.579026  723137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-715609/.minikube/ca.pem (1078 bytes)
	I0920 19:21:53.579140  723137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-715609/.minikube/cert.pem (1123 bytes)
	I0920 19:21:53.579192  723137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-715609/.minikube/key.pem (1679 bytes)
	I0920 19:21:53.579238  723137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-715609/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca-key.pem org=jenkins.addons-711398 san=[127.0.0.1 192.168.49.2 addons-711398 localhost minikube]
	I0920 19:21:53.763714  723137 provision.go:177] copyRemoteCerts
	I0920 19:21:53.763794  723137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:21:53.763840  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:53.780559  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:21:53.880773  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:21:53.905369  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:21:53.929129  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:21:53.953134  723137 provision.go:87] duration metric: took 392.704135ms to configureAuth
	I0920 19:21:53.953161  723137 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:21:53.953376  723137 config.go:182] Loaded profile config "addons-711398": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:21:53.953442  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:53.970509  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:53.970771  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:53.970787  723137 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 19:21:54.121257  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 19:21:54.121279  723137 ubuntu.go:71] root file system type: overlay
	I0920 19:21:54.121404  723137 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 19:21:54.121471  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:54.138976  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:54.139225  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:54.139306  723137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 19:21:54.296585  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
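The unit written above uses the standard systemd override idiom: an empty `ExecStart=` first clears the command inherited from the base configuration, and only then is the replacement command set, exactly as the file's own comments explain. A minimal sketch of that idiom, written to a scratch directory rather than the real unit path (the path and dockerd flags here are illustrative, not the ones minikube uses):

```shell
# Sketch: a drop-in that clears the inherited ExecStart before setting a
# new one. Without the empty "ExecStart=" line, systemd refuses to start a
# Type=notify service ("more than one ExecStart= setting ...").
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# On a real host this would live in /etc/systemd/system/docker.service.d/
# and be followed by: sudo systemctl daemon-reload && sudo systemctl restart docker
grep -c '^ExecStart=' /tmp/docker.service.d/override.conf   # both lines present
```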
	I0920 19:21:54.296672  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:54.313359  723137 main.go:141] libmachine: Using SSH client type: native
	I0920 19:21:54.313612  723137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:21:54.313636  723137 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 19:21:55.135441  723137 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 19:21:54.291909157 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
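The update command at 19:21:54–55 is a compare-then-swap: `diff -u` exits non-zero when the staged `.new` file differs from the installed unit, and only then does the `||` branch move it into place and reload. The same idiom, sketched against throwaway files with an `echo` standing in for the `systemctl` calls:

```shell
# Idempotent file update: install the staged copy and run the reload action
# only when the content actually differs (diff exits non-zero on difference).
cfg=/tmp/demo.service
printf 'old\n' > "$cfg"
printf 'new\n' > "$cfg.new"
diff -u "$cfg" "$cfg.new" > /dev/null || {
    mv "$cfg.new" "$cfg"
    echo "reloaded"   # stand-in for: systemctl daemon-reload && systemctl restart ...
}
cat "$cfg"
```

Run twice, the second pass finds no difference and the reload branch is skipped, which is why repeated provisioning does not restart the daemon needlessly.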
	I0920 19:21:55.135475  723137 machine.go:96] duration metric: took 2.078589043s to provisionDockerMachine
	I0920 19:21:55.135492  723137 client.go:171] duration metric: took 10.355116446s to LocalClient.Create
	I0920 19:21:55.135508  723137 start.go:167] duration metric: took 10.355185933s to libmachine.API.Create "addons-711398"
	I0920 19:21:55.135520  723137 start.go:293] postStartSetup for "addons-711398" (driver="docker")
	I0920 19:21:55.135533  723137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:21:55.135605  723137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:21:55.135651  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:55.154141  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:21:55.258872  723137 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:21:55.262886  723137 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:21:55.262925  723137 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:21:55.262940  723137 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:21:55.262953  723137 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:21:55.262968  723137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-715609/.minikube/addons for local assets ...
	I0920 19:21:55.263052  723137 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-715609/.minikube/files for local assets ...
	I0920 19:21:55.263094  723137 start.go:296] duration metric: took 127.565352ms for postStartSetup
	I0920 19:21:55.263474  723137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-711398
	I0920 19:21:55.281798  723137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/config.json ...
	I0920 19:21:55.282096  723137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:21:55.282159  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:55.299831  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:21:55.397218  723137 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:21:55.402069  723137 start.go:128] duration metric: took 10.623800204s to createHost
	I0920 19:21:55.402093  723137 start.go:83] releasing machines lock for "addons-711398", held for 10.62394795s
	I0920 19:21:55.402177  723137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-711398
	I0920 19:21:55.418979  723137 ssh_runner.go:195] Run: cat /version.json
	I0920 19:21:55.418996  723137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:21:55.419058  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:55.419099  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:21:55.440389  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:21:55.447911  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:21:55.540042  723137 ssh_runner.go:195] Run: systemctl --version
	I0920 19:21:55.671136  723137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:21:55.675552  723137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 19:21:55.703261  723137 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:21:55.703353  723137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:21:55.736315  723137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 19:21:55.736383  723137 start.go:495] detecting cgroup driver to use...
	I0920 19:21:55.736431  723137 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:21:55.736592  723137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:21:55.752920  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 19:21:55.762844  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 19:21:55.773307  723137 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 19:21:55.773427  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 19:21:55.783764  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:21:55.794242  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 19:21:55.804757  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:21:55.814960  723137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:21:55.824445  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 19:21:55.834491  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 19:21:55.844447  723137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 19:21:55.854805  723137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:21:55.863820  723137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:21:55.872502  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:21:55.957503  723137 ssh_runner.go:195] Run: sudo systemctl restart containerd
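The containerd reconfiguration above is a series of plain `sed` rewrites of `config.toml`, no TOML parser involved. A sketch of the cgroup-driver toggle from 19:21:55.773 on a scratch copy (the file content here is a minimal illustrative fragment):

```shell
# Flip SystemdCgroup in a containerd-style config with sed, preserving
# indentation via the captured leading whitespace, as the provisioner does
# when it detects the "cgroupfs" driver on the host.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep 'SystemdCgroup' /tmp/config.toml
```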
	I0920 19:21:56.064661  723137 start.go:495] detecting cgroup driver to use...
	I0920 19:21:56.064706  723137 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:21:56.064769  723137 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 19:21:56.079337  723137 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 19:21:56.079426  723137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 19:21:56.093925  723137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:21:56.110324  723137 ssh_runner.go:195] Run: which cri-dockerd
	I0920 19:21:56.114612  723137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 19:21:56.124980  723137 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 19:21:56.144845  723137 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 19:21:56.246212  723137 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 19:21:56.341598  723137 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 19:21:56.341729  723137 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 19:21:56.363583  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:21:56.462843  723137 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 19:21:56.740432  723137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 19:21:56.753139  723137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 19:21:56.765397  723137 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 19:21:56.862674  723137 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 19:21:56.957070  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:21:57.048459  723137 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 19:21:57.063722  723137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 19:21:57.075884  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:21:57.158527  723137 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 19:21:57.226149  723137 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 19:21:57.226239  723137 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 19:21:57.229975  723137 start.go:563] Will wait 60s for crictl version
	I0920 19:21:57.230038  723137 ssh_runner.go:195] Run: which crictl
	I0920 19:21:57.233776  723137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:21:57.270441  723137 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0920 19:21:57.270515  723137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 19:21:57.293358  723137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 19:21:57.320615  723137 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0920 19:21:57.320750  723137 cli_runner.go:164] Run: docker network inspect addons-711398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:21:57.343824  723137 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:21:57.347657  723137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
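The `/etc/hosts` refresh above is a filter-and-append pass: strip any existing line for the name, append the current mapping, and copy the temp file back, so the entry is updated rather than duplicated. Sketched on a scratch hosts file:

```shell
# Idempotent hosts-entry update, mirroring the command run against
# /etc/hosts above: drop any stale mapping for the name, append the fresh
# one, then install the rebuilt file.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry survives
```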
	I0920 19:21:57.358868  723137 kubeadm.go:883] updating cluster {Name:addons-711398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-711398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:21:57.358994  723137 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 19:21:57.359058  723137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 19:21:57.377404  723137 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 19:21:57.377425  723137 docker.go:615] Images already preloaded, skipping extraction
	I0920 19:21:57.377494  723137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 19:21:57.395122  723137 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 19:21:57.395143  723137 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:21:57.395152  723137 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 19:21:57.395267  723137 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-711398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-711398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:21:57.395331  723137 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 19:21:57.446501  723137 cni.go:84] Creating CNI manager for ""
	I0920 19:21:57.446540  723137 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 19:21:57.446551  723137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:21:57.446573  723137 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-711398 NodeName:addons-711398 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:21:57.446744  723137 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-711398"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:21:57.446825  723137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:21:57.455776  723137 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:21:57.455848  723137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:21:57.464817  723137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 19:21:57.483136  723137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:21:57.501157  723137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 19:21:57.519635  723137 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:21:57.523365  723137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:21:57.534249  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:21:57.630642  723137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:21:57.646531  723137 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398 for IP: 192.168.49.2
	I0920 19:21:57.646596  723137 certs.go:194] generating shared ca certs ...
	I0920 19:21:57.646630  723137 certs.go:226] acquiring lock for ca certs: {Name:mka146aeb8849fa662afd098460ee50b76cdcd3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:57.646778  723137 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-715609/.minikube/ca.key
	I0920 19:21:58.101679  723137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-715609/.minikube/ca.crt ...
	I0920 19:21:58.101711  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/ca.crt: {Name:mk69ee28a9294d68443622c5f147a8be9eb2e2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.101914  723137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-715609/.minikube/ca.key ...
	I0920 19:21:58.101930  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/ca.key: {Name:mk256dee300035c9b2b81542f780656aadb74bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.102570  723137 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.key
	I0920 19:21:58.361763  723137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.crt ...
	I0920 19:21:58.361794  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.crt: {Name:mk3cc193656d1fed6845639ce18a1d53df71f4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.362556  723137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.key ...
	I0920 19:21:58.362573  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.key: {Name:mk7b22eaa279387750056e650cc130742f498b26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.363135  723137 certs.go:256] generating profile certs ...
	I0920 19:21:58.363204  723137 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.key
	I0920 19:21:58.363224  723137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt with IP's: []
	I0920 19:21:58.544718  723137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt ...
	I0920 19:21:58.544750  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: {Name:mk46d763dadcef5e2e8f9a813ad527b4958e2eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.544962  723137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.key ...
	I0920 19:21:58.544976  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.key: {Name:mk89fa1bda5da493f73eb566eed0799f6479246f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:58.545070  723137 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key.3b3594dd
	I0920 19:21:58.545096  723137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt.3b3594dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 19:21:59.256455  723137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt.3b3594dd ...
	I0920 19:21:59.256487  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt.3b3594dd: {Name:mkf93a8399d411e005cb995b5473756fe79068d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:59.257249  723137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key.3b3594dd ...
	I0920 19:21:59.257269  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key.3b3594dd: {Name:mk34dae1678cd9d33f265d15cc56fec4b2987f6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:59.257371  723137 certs.go:381] copying /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt.3b3594dd -> /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt
	I0920 19:21:59.257458  723137 certs.go:385] copying /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key.3b3594dd -> /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key
	I0920 19:21:59.257515  723137 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.key
	I0920 19:21:59.257538  723137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.crt with IP's: []
	I0920 19:21:59.662051  723137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.crt ...
	I0920 19:21:59.662083  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.crt: {Name:mkf9ed1a1cbe47dfb4118289a791fe9d07070325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:59.662793  723137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.key ...
	I0920 19:21:59.662813  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.key: {Name:mk9f93f8f45579477b9e1a14164c8861d4e2d6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:59.663529  723137 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:21:59.663576  723137 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:21:59.663601  723137 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:21:59.663628  723137 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-715609/.minikube/certs/key.pem (1679 bytes)
	I0920 19:21:59.664317  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:21:59.690160  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:21:59.715584  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:21:59.740820  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 19:21:59.766156  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:21:59.790742  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:21:59.816050  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:21:59.841551  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:21:59.866259  723137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-715609/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:21:59.891399  723137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:21:59.910129  723137 ssh_runner.go:195] Run: openssl version
	I0920 19:21:59.915900  723137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:21:59.926060  723137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:21:59.929543  723137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:21:59.929613  723137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:21:59.936824  723137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:21:59.946835  723137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:21:59.950306  723137 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:21:59.950358  723137 kubeadm.go:392] StartCluster: {Name:addons-711398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-711398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:21:59.950545  723137 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 19:21:59.966637  723137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:21:59.975741  723137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:21:59.985067  723137 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:21:59.985132  723137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:21:59.994501  723137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:21:59.994522  723137 kubeadm.go:157] found existing configuration files:
	
	I0920 19:21:59.994597  723137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:22:00.005288  723137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:22:00.005374  723137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:22:00.064774  723137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:22:00.096975  723137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:22:00.097140  723137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:22:00.135435  723137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:22:00.169656  723137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:22:00.169776  723137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:22:00.184833  723137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:22:00.203029  723137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:22:00.203108  723137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:22:00.220543  723137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 19:22:00.309807  723137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:22:00.309895  723137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:22:00.342830  723137 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:22:00.342946  723137 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:22:00.342992  723137 kubeadm.go:310] OS: Linux
	I0920 19:22:00.343097  723137 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:22:00.343181  723137 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:22:00.343253  723137 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:22:00.343333  723137 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:22:00.343384  723137 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:22:00.343490  723137 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:22:00.343576  723137 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:22:00.343654  723137 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:22:00.343732  723137 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:22:00.412748  723137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:22:00.412867  723137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:22:00.412969  723137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:22:00.429535  723137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:22:00.433801  723137 out.go:235]   - Generating certificates and keys ...
	I0920 19:22:00.433907  723137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:22:00.434040  723137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:22:01.151854  723137 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:22:02.085958  723137 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:22:02.587754  723137 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:22:02.715484  723137 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:22:02.981493  723137 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:22:02.981802  723137 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-711398 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:22:03.171622  723137 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:22:03.171963  723137 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-711398 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:22:03.392599  723137 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:22:03.845538  723137 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:22:04.307301  723137 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:22:04.307591  723137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:22:04.631532  723137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:22:05.347396  723137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:22:06.535018  723137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:22:06.853494  723137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:22:07.642803  723137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:22:07.643570  723137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:22:07.646537  723137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:22:07.649558  723137 out.go:235]   - Booting up control plane ...
	I0920 19:22:07.649656  723137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:22:07.649731  723137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:22:07.649809  723137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:22:07.660474  723137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:22:07.667481  723137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:22:07.667536  723137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:22:07.771828  723137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:22:07.771980  723137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:22:08.773239  723137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00167931s
	I0920 19:22:08.773331  723137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:22:15.776303  723137 kubeadm.go:310] [api-check] The API server is healthy after 7.002998254s
	I0920 19:22:15.796684  723137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:22:15.812586  723137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:22:15.838528  723137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:22:15.838726  723137 kubeadm.go:310] [mark-control-plane] Marking the node addons-711398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:22:15.849813  723137 kubeadm.go:310] [bootstrap-token] Using token: gov4e8.ybo58bu4jqgk7v1j
	I0920 19:22:15.852540  723137 out.go:235]   - Configuring RBAC rules ...
	I0920 19:22:15.852668  723137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:22:15.857300  723137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:22:15.865856  723137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:22:15.869915  723137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:22:15.875952  723137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:22:15.880273  723137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:22:16.184900  723137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:22:16.611289  723137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:22:17.183097  723137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:22:17.184266  723137 kubeadm.go:310] 
	I0920 19:22:17.184342  723137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:22:17.184351  723137 kubeadm.go:310] 
	I0920 19:22:17.184428  723137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:22:17.184439  723137 kubeadm.go:310] 
	I0920 19:22:17.184464  723137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:22:17.184523  723137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:22:17.184577  723137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:22:17.184585  723137 kubeadm.go:310] 
	I0920 19:22:17.184638  723137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:22:17.184651  723137 kubeadm.go:310] 
	I0920 19:22:17.184698  723137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:22:17.184705  723137 kubeadm.go:310] 
	I0920 19:22:17.184757  723137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:22:17.184838  723137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:22:17.184910  723137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:22:17.184918  723137 kubeadm.go:310] 
	I0920 19:22:17.185001  723137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:22:17.185084  723137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:22:17.185092  723137 kubeadm.go:310] 
	I0920 19:22:17.185175  723137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gov4e8.ybo58bu4jqgk7v1j \
	I0920 19:22:17.185281  723137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d9817bded5b2df7a22de97c06b991f8c482a20a289feb8315a94f93fe733f2a \
	I0920 19:22:17.185313  723137 kubeadm.go:310] 	--control-plane 
	I0920 19:22:17.185323  723137 kubeadm.go:310] 
	I0920 19:22:17.185408  723137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:22:17.185418  723137 kubeadm.go:310] 
	I0920 19:22:17.185499  723137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gov4e8.ybo58bu4jqgk7v1j \
	I0920 19:22:17.185603  723137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d9817bded5b2df7a22de97c06b991f8c482a20a289feb8315a94f93fe733f2a 
	I0920 19:22:17.190169  723137 kubeadm.go:310] W0920 19:22:00.288785    1807 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:22:17.190489  723137 kubeadm.go:310] W0920 19:22:00.290550    1807 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:22:17.190719  723137 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 19:22:17.190833  723137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:22:17.190862  723137 cni.go:84] Creating CNI manager for ""
	I0920 19:22:17.190884  723137 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 19:22:17.195533  723137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:22:17.198284  723137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:22:17.207374  723137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:22:17.226065  723137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:22:17.226193  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:17.226276  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-711398 minikube.k8s.io/updated_at=2024_09_20T19_22_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-711398 minikube.k8s.io/primary=true
	I0920 19:22:17.367531  723137 ops.go:34] apiserver oom_adj: -16
	I0920 19:22:17.367638  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:17.868108  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:18.368162  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:18.868572  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:19.367766  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:19.868638  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:20.367915  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:20.868495  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:21.368446  723137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:22:21.468871  723137 kubeadm.go:1113] duration metric: took 4.242722067s to wait for elevateKubeSystemPrivileges
	I0920 19:22:21.468896  723137 kubeadm.go:394] duration metric: took 21.518544137s to StartCluster
	I0920 19:22:21.468912  723137 settings.go:142] acquiring lock: {Name:mk489fae9706e26496450fd05dacf08ba58ec1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:22:21.469027  723137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:22:21.469413  723137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/kubeconfig: {Name:mk865c6d1bfbd69f4aebff691c82d6c1986ead8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:22:21.470242  723137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:22:21.470266  723137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 19:22:21.470503  723137 config.go:182] Loaded profile config "addons-711398": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:22:21.470552  723137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 19:22:21.470643  723137 addons.go:69] Setting yakd=true in profile "addons-711398"
	I0920 19:22:21.470663  723137 addons.go:234] Setting addon yakd=true in "addons-711398"
	I0920 19:22:21.470690  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.471164  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.471648  723137 addons.go:69] Setting cloud-spanner=true in profile "addons-711398"
	I0920 19:22:21.471668  723137 addons.go:234] Setting addon cloud-spanner=true in "addons-711398"
	I0920 19:22:21.471692  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.471747  723137 addons.go:69] Setting metrics-server=true in profile "addons-711398"
	I0920 19:22:21.471762  723137 addons.go:234] Setting addon metrics-server=true in "addons-711398"
	I0920 19:22:21.471784  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.472143  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.472278  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.472606  723137 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-711398"
	I0920 19:22:21.472625  723137 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-711398"
	I0920 19:22:21.472647  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.473056  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.476184  723137 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-711398"
	I0920 19:22:21.476250  723137 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-711398"
	I0920 19:22:21.476280  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.477920  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.485029  723137 addons.go:69] Setting registry=true in profile "addons-711398"
	I0920 19:22:21.485102  723137 addons.go:234] Setting addon registry=true in "addons-711398"
	I0920 19:22:21.485152  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.485664  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.500150  723137 addons.go:69] Setting default-storageclass=true in profile "addons-711398"
	I0920 19:22:21.500193  723137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-711398"
	I0920 19:22:21.500546  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.508202  723137 addons.go:69] Setting storage-provisioner=true in profile "addons-711398"
	I0920 19:22:21.508287  723137 addons.go:234] Setting addon storage-provisioner=true in "addons-711398"
	I0920 19:22:21.508358  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.508876  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.523958  723137 addons.go:69] Setting gcp-auth=true in profile "addons-711398"
	I0920 19:22:21.524001  723137 mustload.go:65] Loading cluster: addons-711398
	I0920 19:22:21.524267  723137 config.go:182] Loaded profile config "addons-711398": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:22:21.524546  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.534288  723137 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-711398"
	I0920 19:22:21.534323  723137 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-711398"
	I0920 19:22:21.534678  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.574831  723137 addons.go:69] Setting ingress=true in profile "addons-711398"
	I0920 19:22:21.574872  723137 addons.go:234] Setting addon ingress=true in "addons-711398"
	I0920 19:22:21.574919  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.575439  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.597985  723137 addons.go:69] Setting ingress-dns=true in profile "addons-711398"
	I0920 19:22:21.598022  723137 addons.go:234] Setting addon ingress-dns=true in "addons-711398"
	I0920 19:22:21.598075  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.598580  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.608147  723137 addons.go:69] Setting volcano=true in profile "addons-711398"
	I0920 19:22:21.608185  723137 addons.go:234] Setting addon volcano=true in "addons-711398"
	I0920 19:22:21.608223  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.608745  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.627817  723137 addons.go:69] Setting inspektor-gadget=true in profile "addons-711398"
	I0920 19:22:21.627852  723137 addons.go:234] Setting addon inspektor-gadget=true in "addons-711398"
	I0920 19:22:21.627892  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.628451  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.637098  723137 addons.go:69] Setting volumesnapshots=true in profile "addons-711398"
	I0920 19:22:21.637142  723137 addons.go:234] Setting addon volumesnapshots=true in "addons-711398"
	I0920 19:22:21.637180  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.637680  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.652467  723137 out.go:177] * Verifying Kubernetes components...
	I0920 19:22:21.680728  723137 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 19:22:21.685847  723137 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 19:22:21.685880  723137 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 19:22:21.685956  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.700244  723137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:22:21.708123  723137 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 19:22:21.711016  723137 addons.go:234] Setting addon default-storageclass=true in "addons-711398"
	I0920 19:22:21.712994  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.711201  723137 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 19:22:21.711282  723137 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 19:22:21.711292  723137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:22:21.712883  723137 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 19:22:21.712914  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 19:22:21.713668  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.718803  723137 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-711398"
	I0920 19:22:21.723910  723137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:22:21.725650  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.728180  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.728742  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:21.745522  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 19:22:21.745973  723137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 19:22:21.748017  723137 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 19:22:21.728998  723137 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:22:21.762349  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 19:22:21.761914  723137 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 19:22:21.762550  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 19:22:21.762811  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.761926  723137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:22:21.774899  723137 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 19:22:21.775796  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 19:22:21.776061  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.794468  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 19:22:21.762749  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.797543  723137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:22:21.797609  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:22:21.797715  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.835483  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:21.844170  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 19:22:21.844320  723137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 19:22:21.844359  723137 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 19:22:21.848448  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 19:22:21.848604  723137 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 19:22:21.848651  723137 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 19:22:21.852844  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.877952  723137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:22:21.883038  723137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:22:21.890941  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 19:22:21.901962  723137 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 19:22:21.901975  723137 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 19:22:21.904609  723137 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:22:21.906195  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 19:22:21.906264  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.911587  723137 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:22:21.911664  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 19:22:21.911771  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.931305  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 19:22:21.934190  723137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:22:21.934215  723137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:22:21.934287  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.936325  723137 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 19:22:21.938977  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 19:22:21.940163  723137 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 19:22:21.942685  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 19:22:21.942708  723137 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 19:22:21.942789  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.952129  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 19:22:21.952156  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 19:22:21.952228  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.978877  723137 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 19:22:21.981745  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:21.984949  723137 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 19:22:21.984982  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 19:22:21.985051  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:21.991957  723137 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 19:22:21.995560  723137 out.go:177]   - Using image docker.io/busybox:stable
	I0920 19:22:22.000225  723137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:22:22.000319  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 19:22:22.000431  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:22.028225  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.039714  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.040607  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.071257  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.079807  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.114319  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.120045  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.120736  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.124875  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.128415  723137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:22:22.152303  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.155877  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	W0920 19:22:22.157144  723137 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 19:22:22.157177  723137 retry.go:31] will retry after 370.686783ms: ssh: handshake failed: EOF
	W0920 19:22:22.157667  723137 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 19:22:22.157700  723137 retry.go:31] will retry after 188.036599ms: ssh: handshake failed: EOF
	I0920 19:22:22.169132  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.169366  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:22.753184  723137 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 19:22:22.753210  723137 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 19:22:22.850117  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:22:22.995420  723137 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:22:22.995498  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 19:22:23.004718  723137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:22:23.004800  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 19:22:23.010816  723137 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 19:22:23.010920  723137 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 19:22:23.014199  723137 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 19:22:23.014293  723137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 19:22:23.031020  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:22:23.100793  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:22:23.175050  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 19:22:23.225822  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:22:23.237466  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 19:22:23.240388  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 19:22:23.240474  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 19:22:23.243677  723137 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 19:22:23.243758  723137 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 19:22:23.418104  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:22:23.534530  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:22:23.558242  723137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:22:23.558323  723137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:22:23.650134  723137 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 19:22:23.650215  723137 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 19:22:23.665189  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:22:23.679302  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 19:22:23.679377  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 19:22:23.687384  723137 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 19:22:23.687459  723137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 19:22:23.810740  723137 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 19:22:23.810811  723137 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 19:22:23.838514  723137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:22:23.838583  723137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:22:23.879514  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 19:22:23.879594  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 19:22:23.890607  723137 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 19:22:23.890700  723137 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 19:22:23.972664  723137 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:22:23.972741  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 19:22:23.975074  723137 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229074799s)
	I0920 19:22:23.975148  723137 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 19:22:23.975256  723137 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.846822117s)
	I0920 19:22:23.977184  723137 node_ready.go:35] waiting up to 6m0s for node "addons-711398" to be "Ready" ...
	I0920 19:22:23.980846  723137 node_ready.go:49] node "addons-711398" has status "Ready":"True"
	I0920 19:22:23.980876  723137 node_ready.go:38] duration metric: took 3.506242ms for node "addons-711398" to be "Ready" ...
	I0920 19:22:23.980887  723137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:22:23.991391  723137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:24.037676  723137 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 19:22:24.037757  723137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 19:22:24.049044  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 19:22:24.049072  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 19:22:24.195575  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:22:24.210992  723137 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 19:22:24.211016  723137 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 19:22:24.290237  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:22:24.365116  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 19:22:24.365196  723137 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 19:22:24.391825  723137 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 19:22:24.391913  723137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 19:22:24.453723  723137 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 19:22:24.453823  723137 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 19:22:24.480863  723137 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-711398" context rescaled to 1 replicas
	I0920 19:22:24.703324  723137 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 19:22:24.703399  723137 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 19:22:24.741519  723137 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:22:24.741592  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 19:22:24.765867  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 19:22:24.765947  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 19:22:24.921692  723137 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 19:22:24.921797  723137 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 19:22:25.020479  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:22:25.028617  723137 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:22:25.028710  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 19:22:25.062926  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:22:25.082442  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.232243717s)
	I0920 19:22:25.132887  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 19:22:25.132974  723137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 19:22:25.413092  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 19:22:25.413163  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 19:22:25.999441  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:26.027195  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 19:22:26.027273  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 19:22:26.378115  723137 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:22:26.378199  723137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 19:22:26.429569  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.398460588s)
	I0920 19:22:27.046890  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:22:28.001524  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:28.901838  723137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 19:22:28.901924  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:28.929516  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:29.209493  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.108663595s)
	I0920 19:22:29.901687  723137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 19:22:30.081717  723137 addons.go:234] Setting addon gcp-auth=true in "addons-711398"
	I0920 19:22:30.081777  723137 host.go:66] Checking if "addons-711398" exists ...
	I0920 19:22:30.082290  723137 cli_runner.go:164] Run: docker container inspect addons-711398 --format={{.State.Status}}
	I0920 19:22:30.123649  723137 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 19:22:30.123722  723137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-711398
	I0920 19:22:30.154133  723137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/addons-711398/id_rsa Username:docker}
	I0920 19:22:30.501505  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:32.502110  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:34.503774  723137 pod_ready.go:98] pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 19:22:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 19:22:23 +0000 UTC,FinishedAt:2024-09-20 19:22:33 +0000 UTC,ContainerID:docker://c2f4e989d93187e0930f326143bfcabea56025671d1547cccb537f118fd1b9de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c2f4e989d93187e0930f326143bfcabea56025671d1547cccb537f118fd1b9de Started:0x400000fa30 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40017663c0} {Name:kube-api-access-wjqzh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40017663d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 19:22:34.503996  723137 pod_ready.go:82] duration metric: took 10.512518616s for pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace to be "Ready" ...
	E0920 19:22:34.504027  723137 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-sdxgq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 19:22:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 19:22:22 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 19:22:23 +0000 UTC,FinishedAt:2024-09-20 19:22:33 +0000 UTC,ContainerID:docker://c2f4e989d93187e0930f326143bfcabea56025671d1547cccb537f118fd1b9de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c2f4e989d93187e0930f326143bfcabea56025671d1547cccb537f118fd1b9de Started:0x400000fa30 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40017663c0} {Name:kube-api-access-wjqzh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40017663d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 19:22:34.504067  723137 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:34.905471  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.730337131s)
	I0920 19:22:34.905590  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.679685902s)
	I0920 19:22:34.905601  723137 addons.go:475] Verifying addon ingress=true in "addons-711398"
	I0920 19:22:34.905883  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.668327621s)
	I0920 19:22:34.905964  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.487784602s)
	I0920 19:22:34.906000  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.371383803s)
	I0920 19:22:34.906007  723137 addons.go:475] Verifying addon registry=true in "addons-711398"
	I0920 19:22:34.906325  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.241063773s)
	I0920 19:22:34.906555  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.710953896s)
	I0920 19:22:34.906568  723137 addons.go:475] Verifying addon metrics-server=true in "addons-711398"
	I0920 19:22:34.906658  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.616335068s)
	I0920 19:22:34.906812  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.886229053s)
	W0920 19:22:34.907123  723137 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:22:34.907142  723137 retry.go:31] will retry after 137.355606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:22:34.906878  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.84387586s)
	I0920 19:22:34.909585  723137 out.go:177] * Verifying ingress addon...
	I0920 19:22:34.911339  723137 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-711398 service yakd-dashboard -n yakd-dashboard
	
	I0920 19:22:34.911472  723137 out.go:177] * Verifying registry addon...
	I0920 19:22:34.918896  723137 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 19:22:34.919902  723137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 19:22:34.959109  723137 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 19:22:34.959194  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:34.960195  723137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:22:34.960264  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:35.044920  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:22:35.434299  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:35.435403  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:35.944560  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:35.947813  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:35.975825  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.928849421s)
	I0920 19:22:35.975910  723137 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-711398"
	I0920 19:22:35.976278  723137 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.85260414s)
	I0920 19:22:35.979103  723137 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 19:22:35.979244  723137 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:22:35.983263  723137 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 19:22:35.983722  723137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 19:22:35.985805  723137 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 19:22:35.985871  723137 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 19:22:35.992707  723137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:22:35.992731  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:36.093964  723137 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 19:22:36.094052  723137 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 19:22:36.158566  723137 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:22:36.158633  723137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 19:22:36.272998  723137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:22:36.425286  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:36.425558  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:36.489245  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:36.510855  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:36.925764  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:36.926302  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:36.988359  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:37.396065  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.351040279s)
	I0920 19:22:37.424831  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:37.426051  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:37.489044  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:37.807034  723137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.533925293s)
	I0920 19:22:37.810190  723137 addons.go:475] Verifying addon gcp-auth=true in "addons-711398"
	I0920 19:22:37.813196  723137 out.go:177] * Verifying gcp-auth addon...
	I0920 19:22:37.816718  723137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 19:22:37.831723  723137 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:22:37.930875  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:37.931181  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:37.988393  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:38.425342  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:38.426228  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:38.511006  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:38.526304  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:38.924544  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:38.925622  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:38.989267  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:39.424223  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:39.425263  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:39.496407  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:39.924327  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:39.924865  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:39.988737  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:40.423590  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:40.424541  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:40.512132  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:40.524547  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:40.924713  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:40.926018  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:40.988474  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:41.424579  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:41.424898  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:41.501552  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:41.925393  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:41.926324  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:41.989622  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:42.424972  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:42.425938  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:42.488885  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:42.926548  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:42.927918  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:42.990656  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:43.011343  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:43.425582  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:43.426834  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:43.488610  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:43.923799  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:43.924783  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:43.988462  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:44.424338  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:44.425377  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:44.494302  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:44.925669  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:44.927287  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:44.989510  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:45.020512  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:45.427373  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:45.428377  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:45.488886  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:45.924307  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:45.925226  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:45.988804  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:46.423045  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:46.425865  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:46.490986  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:46.925033  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:46.927671  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:46.988784  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:47.424185  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:47.425633  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:47.489078  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:47.510048  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:47.926538  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:47.927498  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:47.988850  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:48.425836  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:48.426906  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:48.526560  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:48.924583  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:48.925099  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:22:48.989188  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:49.423487  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:49.424192  723137 kapi.go:107] duration metric: took 14.504291666s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 19:22:49.489231  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:49.511038  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:49.923446  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:49.988753  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:50.426152  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:50.488949  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:50.929151  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:50.988498  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:51.423899  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:51.488294  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:51.511093  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:51.923766  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:51.988227  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:52.423749  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:52.488776  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:52.923729  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:52.989054  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:53.423329  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:53.489889  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:53.923247  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:53.989095  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:54.011059  723137 pod_ready.go:103] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"False"
	I0920 19:22:54.423030  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:54.488705  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:54.923817  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:54.989821  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:55.423747  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:55.488032  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:55.924028  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:55.989228  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:56.011405  723137 pod_ready.go:93] pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.011438  723137 pod_ready.go:82] duration metric: took 21.507331769s for pod "coredns-7c65d6cfc9-wkx75" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.011450  723137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.018812  723137 pod_ready.go:93] pod "etcd-addons-711398" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.018854  723137 pod_ready.go:82] duration metric: took 7.395999ms for pod "etcd-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.018867  723137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.024829  723137 pod_ready.go:93] pod "kube-apiserver-addons-711398" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.024867  723137 pod_ready.go:82] duration metric: took 5.991321ms for pod "kube-apiserver-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.024886  723137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.032379  723137 pod_ready.go:93] pod "kube-controller-manager-addons-711398" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.032405  723137 pod_ready.go:82] duration metric: took 7.510622ms for pod "kube-controller-manager-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.032418  723137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mfhq6" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.039728  723137 pod_ready.go:93] pod "kube-proxy-mfhq6" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.039756  723137 pod_ready.go:82] duration metric: took 7.32816ms for pod "kube-proxy-mfhq6" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.039768  723137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.408542  723137 pod_ready.go:93] pod "kube-scheduler-addons-711398" in "kube-system" namespace has status "Ready":"True"
	I0920 19:22:56.408614  723137 pod_ready.go:82] duration metric: took 368.837436ms for pod "kube-scheduler-addons-711398" in "kube-system" namespace to be "Ready" ...
	I0920 19:22:56.408641  723137 pod_ready.go:39] duration metric: took 32.427741025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:22:56.408690  723137 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:22:56.408778  723137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:22:56.423446  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:56.437067  723137 api_server.go:72] duration metric: took 34.966767008s to wait for apiserver process to appear ...
	I0920 19:22:56.437134  723137 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:22:56.437170  723137 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:22:56.446641  723137 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:22:56.447710  723137 api_server.go:141] control plane version: v1.31.1
	I0920 19:22:56.447764  723137 api_server.go:131] duration metric: took 10.609489ms to wait for apiserver health ...
	I0920 19:22:56.447789  723137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:22:56.488836  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:56.616975  723137 system_pods.go:59] 17 kube-system pods found
	I0920 19:22:56.617054  723137 system_pods.go:61] "coredns-7c65d6cfc9-wkx75" [146c07a3-8d88-4d28-939b-2957fd0149a8] Running
	I0920 19:22:56.617083  723137 system_pods.go:61] "csi-hostpath-attacher-0" [8a937cd0-1930-43bc-9010-e3ba31744cc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 19:22:56.617127  723137 system_pods.go:61] "csi-hostpath-resizer-0" [1b574b1e-0912-4ac9-9d3d-ea851899a0a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 19:22:56.617160  723137 system_pods.go:61] "csi-hostpathplugin-h4mkl" [19a2a282-55e8-40b5-ae37-bf12079581fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 19:22:56.617184  723137 system_pods.go:61] "etcd-addons-711398" [47e53eda-7c09-462f-8b0e-96ac60cf38ec] Running
	I0920 19:22:56.617208  723137 system_pods.go:61] "kube-apiserver-addons-711398" [1c65ff8c-35da-4071-8839-b21a66a7726b] Running
	I0920 19:22:56.617244  723137 system_pods.go:61] "kube-controller-manager-addons-711398" [fffb5723-8da9-4e25-aa1c-2d27d4046b93] Running
	I0920 19:22:56.617272  723137 system_pods.go:61] "kube-ingress-dns-minikube" [4b74ac90-44f6-410c-b476-f1c8a7d84b90] Running
	I0920 19:22:56.617299  723137 system_pods.go:61] "kube-proxy-mfhq6" [dbfa9eee-c6dc-4c83-897e-7c31e823e7a8] Running
	I0920 19:22:56.617317  723137 system_pods.go:61] "kube-scheduler-addons-711398" [c7a95707-c06b-434d-aa70-0bb07505c575] Running
	I0920 19:22:56.617353  723137 system_pods.go:61] "metrics-server-84c5f94fbc-cwvt9" [2c61ac4c-97fc-4401-96cb-98c474378544] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:22:56.617381  723137 system_pods.go:61] "nvidia-device-plugin-daemonset-wqj2f" [706a55de-ce14-438b-bd2d-4793efdd30e7] Running
	I0920 19:22:56.617405  723137 system_pods.go:61] "registry-66c9cd494c-84svt" [d2e45ba0-4b0a-4648-a233-1dfc5982c286] Running
	I0920 19:22:56.617429  723137 system_pods.go:61] "registry-proxy-s7k45" [4fbca207-de93-4adb-baa8-2219f829573b] Running
	I0920 19:22:56.617477  723137 system_pods.go:61] "snapshot-controller-56fcc65765-2xmsd" [10818109-c1c2-4475-b2aa-cdafb73ad5ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:22:56.617541  723137 system_pods.go:61] "snapshot-controller-56fcc65765-p8cms" [88bde4a5-50de-4db1-b9b6-f0dccace2981] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:22:56.617562  723137 system_pods.go:61] "storage-provisioner" [d1f5793e-a351-4154-9fda-390dc358bc7b] Running
	I0920 19:22:56.617586  723137 system_pods.go:74] duration metric: took 169.776973ms to wait for pod list to return data ...
	I0920 19:22:56.617619  723137 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:22:56.809313  723137 default_sa.go:45] found service account: "default"
	I0920 19:22:56.809338  723137 default_sa.go:55] duration metric: took 191.695446ms for default service account to be created ...
	I0920 19:22:56.809349  723137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:22:56.935432  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:56.989828  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:57.016394  723137 system_pods.go:86] 17 kube-system pods found
	I0920 19:22:57.016470  723137 system_pods.go:89] "coredns-7c65d6cfc9-wkx75" [146c07a3-8d88-4d28-939b-2957fd0149a8] Running
	I0920 19:22:57.016499  723137 system_pods.go:89] "csi-hostpath-attacher-0" [8a937cd0-1930-43bc-9010-e3ba31744cc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 19:22:57.016543  723137 system_pods.go:89] "csi-hostpath-resizer-0" [1b574b1e-0912-4ac9-9d3d-ea851899a0a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 19:22:57.016576  723137 system_pods.go:89] "csi-hostpathplugin-h4mkl" [19a2a282-55e8-40b5-ae37-bf12079581fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 19:22:57.016598  723137 system_pods.go:89] "etcd-addons-711398" [47e53eda-7c09-462f-8b0e-96ac60cf38ec] Running
	I0920 19:22:57.016621  723137 system_pods.go:89] "kube-apiserver-addons-711398" [1c65ff8c-35da-4071-8839-b21a66a7726b] Running
	I0920 19:22:57.016658  723137 system_pods.go:89] "kube-controller-manager-addons-711398" [fffb5723-8da9-4e25-aa1c-2d27d4046b93] Running
	I0920 19:22:57.016689  723137 system_pods.go:89] "kube-ingress-dns-minikube" [4b74ac90-44f6-410c-b476-f1c8a7d84b90] Running
	I0920 19:22:57.016711  723137 system_pods.go:89] "kube-proxy-mfhq6" [dbfa9eee-c6dc-4c83-897e-7c31e823e7a8] Running
	I0920 19:22:57.016738  723137 system_pods.go:89] "kube-scheduler-addons-711398" [c7a95707-c06b-434d-aa70-0bb07505c575] Running
	I0920 19:22:57.016776  723137 system_pods.go:89] "metrics-server-84c5f94fbc-cwvt9" [2c61ac4c-97fc-4401-96cb-98c474378544] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:22:57.016811  723137 system_pods.go:89] "nvidia-device-plugin-daemonset-wqj2f" [706a55de-ce14-438b-bd2d-4793efdd30e7] Running
	I0920 19:22:57.016845  723137 system_pods.go:89] "registry-66c9cd494c-84svt" [d2e45ba0-4b0a-4648-a233-1dfc5982c286] Running
	I0920 19:22:57.016894  723137 system_pods.go:89] "registry-proxy-s7k45" [4fbca207-de93-4adb-baa8-2219f829573b] Running
	I0920 19:22:57.016924  723137 system_pods.go:89] "snapshot-controller-56fcc65765-2xmsd" [10818109-c1c2-4475-b2aa-cdafb73ad5ae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:22:57.016951  723137 system_pods.go:89] "snapshot-controller-56fcc65765-p8cms" [88bde4a5-50de-4db1-b9b6-f0dccace2981] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:22:57.016974  723137 system_pods.go:89] "storage-provisioner" [d1f5793e-a351-4154-9fda-390dc358bc7b] Running
	I0920 19:22:57.017009  723137 system_pods.go:126] duration metric: took 207.652299ms to wait for k8s-apps to be running ...
	I0920 19:22:57.017037  723137 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:22:57.017133  723137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:22:57.031921  723137 system_svc.go:56] duration metric: took 14.875385ms WaitForService to wait for kubelet
	I0920 19:22:57.031949  723137 kubeadm.go:582] duration metric: took 35.561654655s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:22:57.031968  723137 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:22:57.208986  723137 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:22:57.209025  723137 node_conditions.go:123] node cpu capacity is 2
	I0920 19:22:57.209039  723137 node_conditions.go:105] duration metric: took 177.065174ms to run NodePressure ...
	I0920 19:22:57.209051  723137 start.go:241] waiting for startup goroutines ...
	I0920 19:22:57.423973  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:57.489235  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:57.924462  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:58.025797  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:58.424428  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:58.498694  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:58.926110  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:58.989870  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:59.432846  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:59.489679  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:22:59.923982  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:22:59.992103  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:00.426414  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:00.497531  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:00.924629  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:00.993184  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:01.423937  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:01.489293  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:01.924809  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:01.988491  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:02.423268  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:02.489664  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:02.923491  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:02.988707  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:03.423905  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:03.488454  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:03.923842  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:03.989419  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:04.424489  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:04.489609  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:04.924064  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:04.988713  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:05.425721  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:05.488613  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:05.925842  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:05.995290  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:06.423936  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:06.488463  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:06.924508  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:06.989318  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:07.426568  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:07.488845  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:07.923536  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:07.988937  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:08.423201  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:08.488532  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:08.927963  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:09.029843  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:09.423888  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:09.488670  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:09.924573  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:09.989768  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:10.426084  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:10.494864  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:10.923807  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:10.988247  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:11.423772  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:11.488917  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:11.922964  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:11.988407  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:12.426149  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:12.488730  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:12.923555  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:12.989240  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:13.425618  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:13.525680  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:13.924812  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:13.988374  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:14.428342  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:14.489121  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:14.924747  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:14.989218  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:15.425819  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:15.527663  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:15.923105  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:15.988370  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:16.424571  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:16.489007  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:16.923859  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:16.988923  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:17.425472  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:17.526206  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:17.924292  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:17.989083  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:18.423624  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:18.525237  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:18.922847  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:18.988307  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:19.423325  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:19.489146  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:19.924279  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:19.992550  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:20.423417  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:20.506202  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:20.923806  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:20.990245  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:21.424088  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:21.489314  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:21.925382  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:21.989047  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:22.423635  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:22.488028  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:22.923506  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:22.989827  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:23.423775  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:23.488389  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:23.938979  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:23.988572  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:24.424928  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:24.489199  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:24.923777  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:24.989232  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:25.424116  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:25.489181  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:25.932821  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:26.028783  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:26.423588  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:26.488962  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:26.923308  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:26.989430  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:27.428905  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:27.530264  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:23:27.923787  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:27.989065  723137 kapi.go:107] duration metric: took 52.005339361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 19:23:28.423225  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:28.928167  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:29.423354  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:29.923371  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:30.423951  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:30.923275  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:31.423301  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:31.926340  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:32.423368  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:32.924520  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:33.423344  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:33.923735  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:34.423367  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:34.923449  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:35.423595  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:35.923282  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:36.422849  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:36.923739  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:37.428271  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:37.923695  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:38.424382  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:38.924416  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:39.424732  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:39.923627  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:40.423479  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:40.924430  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:41.423243  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:41.923985  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:42.424793  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:42.923786  723137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:23:43.422789  723137 kapi.go:107] duration metric: took 1m8.503894524s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 19:23:59.820701  723137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:23:59.820726  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:00.333843  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:00.820608  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:01.320643  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:01.821217  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:02.319773  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:02.820035  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:03.321076  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:03.821392  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:04.320240  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:04.820981  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:05.320303  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:05.820252  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:06.320543  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:06.819876  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:07.320750  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:07.820289  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:08.320111  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:08.820725  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:09.320847  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:09.820018  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:10.320993  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:10.820969  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:11.321337  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:11.819649  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:12.320156  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:12.820716  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:13.321044  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:13.820865  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:14.319994  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:14.820067  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:15.321052  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:15.822371  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:16.320390  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:16.820810  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:17.321132  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:17.819924  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:18.320874  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:18.821186  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:19.320292  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:19.820736  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:20.320719  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:20.820455  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:21.320260  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:21.820970  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:22.320374  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:22.819989  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:23.321164  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:23.820349  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:24.327977  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:24.820814  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:25.320832  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:25.820373  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:26.320027  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:26.822105  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:27.321417  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:27.820859  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:28.320291  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:28.819909  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:29.320804  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:29.820794  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:30.319932  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:30.820265  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:31.320189  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:31.820260  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:32.320665  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:32.820150  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:33.320395  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:33.820604  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:34.320833  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:34.820254  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:35.320939  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:35.820311  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:36.319875  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:36.825879  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:37.320519  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:37.820673  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:38.320351  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:38.819819  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:39.321173  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:39.820300  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:40.319998  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:40.821966  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:41.321272  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:41.824393  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:42.320216  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:42.819715  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:43.320404  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:43.819974  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:44.321261  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:44.821502  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:45.320668  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:45.820759  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:46.321279  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:46.819834  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:47.320897  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:47.819972  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:48.321078  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:48.820703  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:49.320529  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:49.821274  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:50.320195  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:50.820005  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:51.320207  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:51.820673  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:52.319919  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:52.820029  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:53.321138  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:53.820798  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:54.320476  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:54.820763  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:55.320674  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:55.820893  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:56.320151  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:56.820865  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:57.320777  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:57.820966  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:58.321309  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:58.820025  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:59.321111  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:24:59.821489  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:00.321238  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:00.820899  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:01.320764  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:01.821085  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:02.321219  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:02.820569  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:03.320395  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:03.819726  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:04.320156  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:04.820248  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:05.320112  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:05.821192  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:06.320792  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:06.819959  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:07.320753  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:07.821170  723137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:08.320686  723137 kapi.go:107] duration metric: took 2m30.503965778s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 19:25:08.323479  723137 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-711398 cluster.
	I0920 19:25:08.326671  723137 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 19:25:08.329370  723137 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 19:25:08.332210  723137 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, volcano, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 19:25:08.334899  723137 addons.go:510] duration metric: took 2m46.864347589s for enable addons: enabled=[nvidia-device-plugin storage-provisioner storage-provisioner-rancher volcano cloud-spanner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 19:25:08.334950  723137 start.go:246] waiting for cluster config update ...
	I0920 19:25:08.334972  723137 start.go:255] writing updated cluster config ...
	I0920 19:25:08.335280  723137 ssh_runner.go:195] Run: rm -f paused
	I0920 19:25:08.755213  723137 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:25:08.758521  723137 out.go:177] * Done! kubectl is now configured to use "addons-711398" cluster and "default" namespace by default
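	
	(Editor's note: the gcp-auth hint above refers to the `gcp-auth-skip-secret` pod label. As a minimal sketch — the pod name and image here are illustrative, only the label key comes from the log output — a pod that opts out of credential mounting would look like:)

	```yaml
	# Hypothetical pod spec; only the gcp-auth-skip-secret label key is taken
	# from the minikube gcp-auth addon message in the log above.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example
	  labels:
	    gcp-auth-skip-secret: "true"   # prevents the addon webhook from mounting GCP credentials
	spec:
	  containers:
	  - name: app
	    image: busybox:stable
	    command: ["sleep", "3600"]
	```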
	
	
	==> Docker <==
	Sep 20 19:34:34 addons-711398 dockerd[1280]: time="2024-09-20T19:34:34.797297476Z" level=info msg="ignoring event" container=2d29101c51549ecd00266081d2bd4a2aab2d0b2ee5ef6feaea94114d47d9ed5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:34 addons-711398 dockerd[1280]: time="2024-09-20T19:34:34.817313571Z" level=info msg="ignoring event" container=7ee5628d32d2ab468d6c8a0c0ff0a56f3d727b0c36cefe0b405d4cd092e3b093 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:35 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:35Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 19:34:37 addons-711398 dockerd[1280]: time="2024-09-20T19:34:37.092789084Z" level=info msg="ignoring event" container=d236dd6bb94bbbf59152d56092d24df67ba04978f85ad6a6517400e51dd1d6ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:41 addons-711398 dockerd[1280]: time="2024-09-20T19:34:41.314142297Z" level=info msg="ignoring event" container=0ecb5a23c8f61658c2f69090da93fb8148ab0fa15ab16607b3be2614f5e11a8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:41 addons-711398 dockerd[1280]: time="2024-09-20T19:34:41.491624115Z" level=info msg="ignoring event" container=53711243fb3a331b602f8f22aa3078231d02c28848bac1ca345718a67c77974e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:42 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e74930821d151aa178ef0ab1e682ca456ed20698a08b10b2e663c9fa924cfe3a/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 19:34:42 addons-711398 dockerd[1280]: time="2024-09-20T19:34:42.420791053Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 20 19:34:42 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:42Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 20 19:34:43 addons-711398 dockerd[1280]: time="2024-09-20T19:34:43.089840272Z" level=info msg="ignoring event" container=c7762e73321c298423a81db357637945f8c3a226386b9af9f7101d6e5be8ae7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:44 addons-711398 dockerd[1280]: time="2024-09-20T19:34:44.413061213Z" level=info msg="ignoring event" container=e74930821d151aa178ef0ab1e682ca456ed20698a08b10b2e663c9fa924cfe3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:46 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/996a1d75cb94210a2acca8efa688515cf74b6123bea3d4144d29757f52cdba7a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 19:34:47 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:47Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 20 19:34:47 addons-711398 dockerd[1280]: time="2024-09-20T19:34:47.345593737Z" level=info msg="ignoring event" container=8f6e27093614c60a9883dc5860fc67adb2b8aa2b7402a74eee86fe1f486f1792 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:49 addons-711398 dockerd[1280]: time="2024-09-20T19:34:49.542942939Z" level=info msg="ignoring event" container=996a1d75cb94210a2acca8efa688515cf74b6123bea3d4144d29757f52cdba7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:49 addons-711398 dockerd[1280]: time="2024-09-20T19:34:49.721290138Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 20 19:34:49 addons-711398 dockerd[1280]: time="2024-09-20T19:34:49.723952973Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 20 19:34:50 addons-711398 cri-dockerd[1536]: time="2024-09-20T19:34:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/320fbcc25d086789d2f69cff8245784997a830201ad2d228faf2ddc1eabc24dd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 19:34:51 addons-711398 dockerd[1280]: time="2024-09-20T19:34:51.167419695Z" level=info msg="ignoring event" container=7feea648c1fd1eafa1e03da6fb34b88c4518267d0c6f291c2d6bee7780aaf4ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:34:52 addons-711398 dockerd[1280]: time="2024-09-20T19:34:52.591349849Z" level=info msg="ignoring event" container=320fbcc25d086789d2f69cff8245784997a830201ad2d228faf2ddc1eabc24dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:35:05 addons-711398 dockerd[1280]: time="2024-09-20T19:35:05.168868614Z" level=info msg="ignoring event" container=5ce24c30324644d9885da4a436fddb8de565634d76ff26e54ed48c701348a4d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:35:05 addons-711398 dockerd[1280]: time="2024-09-20T19:35:05.817673849Z" level=info msg="ignoring event" container=f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:35:05 addons-711398 dockerd[1280]: time="2024-09-20T19:35:05.937452841Z" level=info msg="ignoring event" container=c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:35:06 addons-711398 dockerd[1280]: time="2024-09-20T19:35:06.059903292Z" level=info msg="ignoring event" container=ddeeebb4aa51f0ef66301296cee0c5ad999ec876d63ed1c728ad744604d508ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 19:35:06 addons-711398 dockerd[1280]: time="2024-09-20T19:35:06.197559740Z" level=info msg="ignoring event" container=4bbe09082ceb686bb0d488dbd36a4f09f2cce8cc55fc07a83dd0a1b4d56d40e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7feea648c1fd1       fc9db2894f4e4                                                                                                                17 seconds ago      Exited              helper-pod                0                   320fbcc25d086       helper-pod-delete-pvc-9a9bf7c2-70be-4ebd-8920-1988957db53e
	8f6e27093614c       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              20 seconds ago      Exited              busybox                   0                   996a1d75cb942       test-local-path
	c7762e73321c2       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              25 seconds ago      Exited              helper-pod                0                   e74930821d151       helper-pod-create-pvc-9a9bf7c2-70be-4ebd-8920-1988957db53e
	d236dd6bb94bb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            32 seconds ago      Exited              gadget                    7                   45c2e44bacafd       gadget-hp9tl
	1443146b883ce       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   284d18d0bc73e       gcp-auth-89d5ffd79-pjkck
	a8ebfa27beee5       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   7399d2b7f25e4       ingress-nginx-controller-bc57996ff-bttnp
	8cbf5f6e7b549       420193b27261a                                                                                                                11 minutes ago      Exited              patch                     1                   1a5c846bd0818       ingress-nginx-admission-patch-n2285
	6c3796126af42       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   c1e2c04149434       ingress-nginx-admission-create-7mls9
	4790c00475ce1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   479fc5ca9ddfa       local-path-provisioner-86d989889c-kd5bv
	cff85728082c6       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   915dca5d2307e       metrics-server-84c5f94fbc-cwvt9
	6122dfaa35254       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   bc6978c099a95       kube-ingress-dns-minikube
	85e0a9205042f       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   76f7ff38b323f       cloud-spanner-emulator-769b77f747-g4968
	dcbbde731377a       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   ec9228fda243b       storage-provisioner
	62308f2fbb336       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   8b736fcdb1c85       coredns-7c65d6cfc9-wkx75
	060c8c64d4225       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   89c24f17713f3       kube-proxy-mfhq6
	3a5df68bb4480       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   4055336d587e2       etcd-addons-711398
	23d47197484db       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   a2a2b7d9fca75       kube-scheduler-addons-711398
	cc9b700d2590d       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   6ffa0b97ce81a       kube-controller-manager-addons-711398
	c13c74eade154       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   1db898b9f0ca2       kube-apiserver-addons-711398
	
	
	==> controller_ingress [a8ebfa27beee] <==
	W0920 19:23:43.064743       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0920 19:23:43.065064       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0920 19:23:43.077472       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0920 19:23:44.061825       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0920 19:23:44.082127       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0920 19:23:44.094506       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0920 19:23:44.113454       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ba8ec272-bb20-4175-beff-a07133e0c4aa", APIVersion:"v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0920 19:23:44.131591       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"27e10ab0-8982-43c2-986f-b93bc500847d", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 19:23:44.141224       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"38ec8c9e-3006-48a5-9910-103dcd0e7a5c", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 19:23:45.296232       7 nginx.go:317] "Starting NGINX process"
	I0920 19:23:45.296434       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 19:23:45.299135       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 19:23:45.299335       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 19:23:45.342721       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 19:23:45.343113       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-bttnp"
	I0920 19:23:45.352708       7 controller.go:213] "Backend successfully reloaded"
	I0920 19:23:45.352905       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 19:23:45.353062       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-bttnp", UID:"62c6c992-731f-4ffa-b530-a98b5b5103e2", APIVersion:"v1", ResourceVersion:"1281", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0920 19:23:45.431169       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-bttnp" node="addons-711398"
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [62308f2fbb33] <==
	Trace[274955516]: [30.001618174s] [30.001618174s] END
	[INFO] plugin/kubernetes: Trace[986262570]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:22:23.618) (total time: 30000ms):
	Trace[986262570]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:22:53.619)
	Trace[986262570]: [30.00086798s] [30.00086798s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2068232413]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:22:23.618) (total time: 30000ms):
	Trace[2068232413]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:22:53.619)
	Trace[2068232413]: [30.000736348s] [30.000736348s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41072 - 576 "HINFO IN 7204552164135868152.7786840848119861607. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037833317s
	[INFO] 10.244.0.25:55282 - 35073 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000386198s
	[INFO] 10.244.0.25:35547 - 26219 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000541944s
	[INFO] 10.244.0.25:51421 - 30851 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105819s
	[INFO] 10.244.0.25:56747 - 11012 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000288535s
	[INFO] 10.244.0.25:53372 - 3255 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155393s
	[INFO] 10.244.0.25:42882 - 8912 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087252s
	[INFO] 10.244.0.25:58952 - 23561 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00258796s
	[INFO] 10.244.0.25:54367 - 55125 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294109s
	[INFO] 10.244.0.25:54268 - 57761 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002267475s
	[INFO] 10.244.0.25:51107 - 5993 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001940804s
	
	
	==> describe nodes <==
	Name:               addons-711398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-711398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-711398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_22_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-711398
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-711398
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:35:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:34:52 +0000   Fri, 20 Sep 2024 19:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:34:52 +0000   Fri, 20 Sep 2024 19:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:34:52 +0000   Fri, 20 Sep 2024 19:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:34:52 +0000   Fri, 20 Sep 2024 19:22:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-711398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 21e1fd8952384521b2369f9b1931dd39
	  System UUID:                f977aaa1-dba3-4af0-92ab-c521c8270934
	  Boot ID:                    32c222cc-d06c-4f68-9fc3-59cd35d0dbd2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-g4968     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-hp9tl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-pjkck                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-bttnp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-wkx75                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-711398                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-711398                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-711398       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mfhq6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-711398                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-cwvt9             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-kd5bv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-711398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-711398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-711398 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-711398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-711398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-711398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-711398 event: Registered Node addons-711398 in Controller
	
	
	==> dmesg <==
	[Sep20 18:55] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [3a5df68bb448] <==
	{"level":"info","ts":"2024-09-20T19:22:09.964874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-20T19:22:09.965218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-20T19:22:10.684122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T19:22:10.684366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T19:22:10.684507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T19:22:10.684642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:22:10.684748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T19:22:10.684878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:22:10.684963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T19:22:10.688275Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-711398 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:22:10.688568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:22:10.689047Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:22:10.692116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:22:10.693345Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:22:10.705338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T19:22:10.700502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:22:10.701670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:22:10.702236Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:22:10.712660Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:22:10.713901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:22:10.723183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:22:10.728131Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:32:11.098166Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1887}
	{"level":"info","ts":"2024-09-20T19:32:11.196142Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1887,"took":"97.112102ms","hash":126441347,"current-db-size-bytes":8859648,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4943872,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-20T19:32:11.196203Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":126441347,"revision":1887,"compact-revision":-1}
	
	
	==> gcp-auth [1443146b883c] <==
	2024/09/20 19:25:07 GCP Auth Webhook started!
	2024/09/20 19:25:25 Ready to marshal response ...
	2024/09/20 19:25:25 Ready to write response ...
	2024/09/20 19:25:25 Ready to marshal response ...
	2024/09/20 19:25:25 Ready to write response ...
	2024/09/20 19:25:50 Ready to marshal response ...
	2024/09/20 19:25:50 Ready to write response ...
	2024/09/20 19:25:50 Ready to marshal response ...
	2024/09/20 19:25:50 Ready to write response ...
	2024/09/20 19:25:50 Ready to marshal response ...
	2024/09/20 19:25:50 Ready to write response ...
	2024/09/20 19:34:04 Ready to marshal response ...
	2024/09/20 19:34:04 Ready to write response ...
	2024/09/20 19:34:05 Ready to marshal response ...
	2024/09/20 19:34:05 Ready to write response ...
	2024/09/20 19:34:19 Ready to marshal response ...
	2024/09/20 19:34:19 Ready to write response ...
	2024/09/20 19:34:41 Ready to marshal response ...
	2024/09/20 19:34:41 Ready to write response ...
	2024/09/20 19:34:41 Ready to marshal response ...
	2024/09/20 19:34:41 Ready to write response ...
	2024/09/20 19:34:50 Ready to marshal response ...
	2024/09/20 19:34:50 Ready to write response ...
	
	
	==> kernel <==
	 19:35:07 up  3:17,  0 users,  load average: 0.85, 0.87, 1.46
	Linux addons-711398 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c13c74eade15] <==
	I0920 19:25:41.157756       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 19:25:41.190607       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 19:25:41.259618       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 19:25:41.376496       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 19:25:41.894327       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 19:25:41.905403       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 19:25:41.906749       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 19:25:41.997080       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 19:25:42.260287       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 19:25:42.596745       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 19:34:13.042365       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:34:34.390560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:34:34.390608       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:34:34.414927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:34:34.414988       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:34:34.438134       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:34:34.438185       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:34:34.453428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:34:34.453481       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:34:34.569737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:34:34.569781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:34:35.439002       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:34:35.570435       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:34:35.587110       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0920 19:35:06.406181       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [cc9b700d2590] <==
	W0920 19:34:38.726082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:38.726126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:39.192866       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:39.192911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:42.951835       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:42.951877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:43.652739       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:43.652793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:44.492378       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:44.492425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:34:51.042636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="4.521µs"
	I0920 19:34:51.658842       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 19:34:51.658881       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 19:34:51.895321       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 19:34:51.896039       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 19:34:52.312716       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-711398"
	W0920 19:34:54.234836       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:54.234879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:54.745117       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:54.745161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:34:56.723805       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:34:56.723848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:35:05.760448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.48µs"
	W0920 19:35:06.879455       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:35:06.879500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [060c8c64d422] <==
	I0920 19:22:23.348320       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:22:23.515835       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:22:23.515917       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:22:23.545966       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:22:23.546030       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:22:23.548675       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:22:23.549066       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:22:23.549097       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:22:23.550901       1 config.go:199] "Starting service config controller"
	I0920 19:22:23.550929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:22:23.550978       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:22:23.550983       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:22:23.553928       1 config.go:328] "Starting node config controller"
	I0920 19:22:23.553943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:22:23.651908       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:22:23.651997       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:22:23.654148       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23d47197484d] <==
	W0920 19:22:14.919140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:22:14.919226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.919399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:22:14.919479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.919604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:22:14.919706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.919943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:22:14.920030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.920284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:22:14.920771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.920364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:22:14.921227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.920403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:22:14.921388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.920448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:22:14.921675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.920503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:22:14.922143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:22:14.921801       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:22:14.920563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0920 19:22:14.920633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:22:14.922589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 19:22:14.922656       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0920 19:22:14.922752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 19:22:16.307911       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:34:54 addons-711398 kubelet[2326]: E0920 19:34:54.508572    2326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-hp9tl_gadget(53daf043-7f53-406d-ac9d-0815258bc4b1)\"" pod="gadget/gadget-hp9tl" podUID="53daf043-7f53-406d-ac9d-0815258bc4b1"
	Sep 20 19:34:56 addons-711398 kubelet[2326]: I0920 19:34:56.517960    2326 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02081004-efea-460d-8eec-c014cf579521" path="/var/lib/kubelet/pods/02081004-efea-460d-8eec-c014cf579521/volumes"
	Sep 20 19:34:59 addons-711398 kubelet[2326]: E0920 19:34:59.510125    2326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f8da1040-f0d0-4fec-9895-578b53a3b266"
	Sep 20 19:35:03 addons-711398 kubelet[2326]: E0920 19:35:03.509938    2326 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="66a65f15-36bd-442f-9ee9-7d762e51c91c"
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.367133    2326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/66a65f15-36bd-442f-9ee9-7d762e51c91c-gcp-creds\") pod \"66a65f15-36bd-442f-9ee9-7d762e51c91c\" (UID: \"66a65f15-36bd-442f-9ee9-7d762e51c91c\") "
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.367199    2326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59kxw\" (UniqueName: \"kubernetes.io/projected/66a65f15-36bd-442f-9ee9-7d762e51c91c-kube-api-access-59kxw\") pod \"66a65f15-36bd-442f-9ee9-7d762e51c91c\" (UID: \"66a65f15-36bd-442f-9ee9-7d762e51c91c\") "
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.368279    2326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a65f15-36bd-442f-9ee9-7d762e51c91c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "66a65f15-36bd-442f-9ee9-7d762e51c91c" (UID: "66a65f15-36bd-442f-9ee9-7d762e51c91c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.372650    2326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a65f15-36bd-442f-9ee9-7d762e51c91c-kube-api-access-59kxw" (OuterVolumeSpecName: "kube-api-access-59kxw") pod "66a65f15-36bd-442f-9ee9-7d762e51c91c" (UID: "66a65f15-36bd-442f-9ee9-7d762e51c91c"). InnerVolumeSpecName "kube-api-access-59kxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.468853    2326 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/66a65f15-36bd-442f-9ee9-7d762e51c91c-gcp-creds\") on node \"addons-711398\" DevicePath \"\""
	Sep 20 19:35:05 addons-711398 kubelet[2326]: I0920 19:35:05.468892    2326 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-59kxw\" (UniqueName: \"kubernetes.io/projected/66a65f15-36bd-442f-9ee9-7d762e51c91c-kube-api-access-59kxw\") on node \"addons-711398\" DevicePath \"\""
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.275717    2326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lftr\" (UniqueName: \"kubernetes.io/projected/d2e45ba0-4b0a-4648-a233-1dfc5982c286-kube-api-access-2lftr\") pod \"d2e45ba0-4b0a-4648-a233-1dfc5982c286\" (UID: \"d2e45ba0-4b0a-4648-a233-1dfc5982c286\") "
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.279817    2326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e45ba0-4b0a-4648-a233-1dfc5982c286-kube-api-access-2lftr" (OuterVolumeSpecName: "kube-api-access-2lftr") pod "d2e45ba0-4b0a-4648-a233-1dfc5982c286" (UID: "d2e45ba0-4b0a-4648-a233-1dfc5982c286"). InnerVolumeSpecName "kube-api-access-2lftr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.376488    2326 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx75b\" (UniqueName: \"kubernetes.io/projected/4fbca207-de93-4adb-baa8-2219f829573b-kube-api-access-wx75b\") pod \"4fbca207-de93-4adb-baa8-2219f829573b\" (UID: \"4fbca207-de93-4adb-baa8-2219f829573b\") "
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.376622    2326 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2lftr\" (UniqueName: \"kubernetes.io/projected/d2e45ba0-4b0a-4648-a233-1dfc5982c286-kube-api-access-2lftr\") on node \"addons-711398\" DevicePath \"\""
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.381046    2326 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbca207-de93-4adb-baa8-2219f829573b-kube-api-access-wx75b" (OuterVolumeSpecName: "kube-api-access-wx75b") pod "4fbca207-de93-4adb-baa8-2219f829573b" (UID: "4fbca207-de93-4adb-baa8-2219f829573b"). InnerVolumeSpecName "kube-api-access-wx75b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.477101    2326 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wx75b\" (UniqueName: \"kubernetes.io/projected/4fbca207-de93-4adb-baa8-2219f829573b-kube-api-access-wx75b\") on node \"addons-711398\" DevicePath \"\""
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.526788    2326 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a65f15-36bd-442f-9ee9-7d762e51c91c" path="/var/lib/kubelet/pods/66a65f15-36bd-442f-9ee9-7d762e51c91c/volumes"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.709052    2326 scope.go:117] "RemoveContainer" containerID="c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.760198    2326 scope.go:117] "RemoveContainer" containerID="c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: E0920 19:35:06.761419    2326 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c" containerID="c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.761634    2326 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c"} err="failed to get container status \"c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c\": rpc error: code = Unknown desc = Error response from daemon: No such container: c8ddd2e4784afd8f3ed04ab0294c18ddc8d819776f9b2f7cc18c6fb4fd526e2c"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.761743    2326 scope.go:117] "RemoveContainer" containerID="f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.794344    2326 scope.go:117] "RemoveContainer" containerID="f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: E0920 19:35:06.796536    2326 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481" containerID="f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481"
	Sep 20 19:35:06 addons-711398 kubelet[2326]: I0920 19:35:06.796614    2326 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481"} err="failed to get container status \"f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481\": rpc error: code = Unknown desc = Error response from daemon: No such container: f8330d199113ddd1713739370a5f6157c8a097384c87635281582db2e6df1481"
	
	
	==> storage-provisioner [dcbbde731377] <==
	I0920 19:22:28.186265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:22:28.206714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:22:28.206758       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:22:28.224439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:22:28.224690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"406391ed-d0f5-42dc-b149-3c882314ab08", APIVersion:"v1", ResourceVersion:"531", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-711398_6bec32ad-bee2-4c3b-8880-2b31f1cab225 became leader
	I0920 19:22:28.225338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-711398_6bec32ad-bee2-4c3b-8880-2b31f1cab225!
	I0920 19:22:28.328206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-711398_6bec32ad-bee2-4c3b-8880-2b31f1cab225!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-711398 -n addons-711398
helpers_test.go:261: (dbg) Run:  kubectl --context addons-711398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7mls9 ingress-nginx-admission-patch-n2285
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-711398 describe pod busybox ingress-nginx-admission-create-7mls9 ingress-nginx-admission-patch-n2285
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-711398 describe pod busybox ingress-nginx-admission-create-7mls9 ingress-nginx-admission-patch-n2285: exit status 1 (103.95304ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-711398/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 19:25:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n2g9z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n2g9z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-711398
	  Normal   Pulling    7m54s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m42s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m7s (x22 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7mls9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n2285" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-711398 describe pod busybox ingress-nginx-admission-create-7mls9 ingress-nginx-admission-patch-n2285: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.48s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.42
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.68
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 59.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 221.42
29 TestAddons/serial/Volcano 41.44
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 21.51
35 TestAddons/parallel/InspektorGadget 11.81
36 TestAddons/parallel/MetricsServer 5.75
38 TestAddons/parallel/CSI 30.29
39 TestAddons/parallel/Headlamp 17.69
40 TestAddons/parallel/CloudSpanner 5.52
41 TestAddons/parallel/LocalPath 52.55
42 TestAddons/parallel/NvidiaDevicePlugin 6.46
43 TestAddons/parallel/Yakd 10.75
44 TestAddons/StoppedEnableDisable 11.32
45 TestCertOptions 44.43
46 TestCertExpiration 253.94
47 TestDockerFlags 45.23
48 TestForceSystemdFlag 44.89
49 TestForceSystemdEnv 42.23
55 TestErrorSpam/setup 35.76
56 TestErrorSpam/start 0.74
57 TestErrorSpam/status 1.06
58 TestErrorSpam/pause 1.38
59 TestErrorSpam/unpause 1.51
60 TestErrorSpam/stop 11.2
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 48.23
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 31.1
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
72 TestFunctional/serial/CacheCmd/cache/add_local 1
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.13
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 45.32
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.2
83 TestFunctional/serial/LogsFileCmd 1.21
84 TestFunctional/serial/InvalidService 5.17
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 9.47
88 TestFunctional/parallel/DryRun 0.59
89 TestFunctional/parallel/InternationalLanguage 0.24
90 TestFunctional/parallel/StatusCmd 1.26
94 TestFunctional/parallel/ServiceCmdConnect 10.66
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 30.08
98 TestFunctional/parallel/SSHCmd 0.77
99 TestFunctional/parallel/CpCmd 1.61
101 TestFunctional/parallel/FileSync 0.28
102 TestFunctional/parallel/CertSync 1.65
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
110 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
113 TestFunctional/parallel/Version/short 0.07
114 TestFunctional/parallel/Version/components 1.25
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.61
120 TestFunctional/parallel/ImageCommands/Setup 0.64
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.33
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
131 TestFunctional/parallel/DockerEnv/bash 1
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/MountCmd/any-port 7.41
142 TestFunctional/parallel/MountCmd/specific-port 1.97
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
144 TestFunctional/parallel/ServiceCmd/DeployApp 8.25
145 TestFunctional/parallel/ServiceCmd/List 0.58
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
148 TestFunctional/parallel/ProfileCmd/profile_list 0.52
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
151 TestFunctional/parallel/ServiceCmd/Format 0.52
152 TestFunctional/parallel/ServiceCmd/URL 0.54
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 123.14
160 TestMultiControlPlane/serial/DeployApp 45.12
161 TestMultiControlPlane/serial/PingHostFromPods 1.72
162 TestMultiControlPlane/serial/AddWorkerNode 28.74
163 TestMultiControlPlane/serial/NodeLabels 0.14
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
165 TestMultiControlPlane/serial/CopyFile 19.44
166 TestMultiControlPlane/serial/StopSecondaryNode 11.76
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
168 TestMultiControlPlane/serial/RestartSecondaryNode 77.04
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.07
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 240.55
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.25
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
173 TestMultiControlPlane/serial/StopCluster 32.96
174 TestMultiControlPlane/serial/RestartCluster 148.39
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
176 TestMultiControlPlane/serial/AddSecondaryNode 49.34
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
180 TestImageBuild/serial/Setup 30.03
181 TestImageBuild/serial/NormalBuild 1.95
182 TestImageBuild/serial/BuildWithBuildArg 1.05
183 TestImageBuild/serial/BuildWithDockerIgnore 0.85
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.92
188 TestJSONOutput/start/Command 40.35
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.62
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.54
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.85
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.23
213 TestKicCustomNetwork/create_custom_network 35.29
214 TestKicCustomNetwork/use_default_bridge_network 31.44
215 TestKicExistingNetwork 29.59
216 TestKicCustomSubnet 33.46
217 TestKicStaticIP 34.11
218 TestMainNoArgs 0.06
219 TestMinikubeProfile 68.76
222 TestMountStart/serial/StartWithMountFirst 7.69
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 7.69
225 TestMountStart/serial/VerifyMountSecond 0.25
226 TestMountStart/serial/DeleteFirst 1.48
227 TestMountStart/serial/VerifyMountPostDelete 0.26
228 TestMountStart/serial/Stop 1.21
229 TestMountStart/serial/RestartStopped 9.07
230 TestMountStart/serial/VerifyMountPostStop 0.25
233 TestMultiNode/serial/FreshStart2Nodes 83.03
234 TestMultiNode/serial/DeployApp2Nodes 46.11
235 TestMultiNode/serial/PingHostFrom2Pods 1.02
236 TestMultiNode/serial/AddNode 18.54
237 TestMultiNode/serial/MultiNodeLabels 0.1
238 TestMultiNode/serial/ProfileList 0.74
239 TestMultiNode/serial/CopyFile 10.33
240 TestMultiNode/serial/StopNode 2.29
241 TestMultiNode/serial/StartAfterStop 11.39
242 TestMultiNode/serial/RestartKeepsNodes 104.52
243 TestMultiNode/serial/DeleteNode 5.69
244 TestMultiNode/serial/StopMultiNode 21.69
245 TestMultiNode/serial/RestartMultiNode 59.41
246 TestMultiNode/serial/ValidateNameConflict 37.06
251 TestPreload 101.73
253 TestScheduledStopUnix 106.84
254 TestSkaffold 118
256 TestInsufficientStorage 11.19
257 TestRunningBinaryUpgrade 108.96
259 TestKubernetesUpgrade 379.51
260 TestMissingContainerUpgrade 170.44
262 TestPause/serial/Start 49.73
263 TestPause/serial/SecondStartNoReconfiguration 39.2
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
266 TestNoKubernetes/serial/StartWithK8s 38.41
267 TestPause/serial/Pause 0.89
268 TestPause/serial/VerifyStatus 0.49
269 TestPause/serial/Unpause 0.74
270 TestPause/serial/PauseAgain 1
271 TestPause/serial/DeletePaused 2.39
272 TestPause/serial/VerifyDeletedResources 0.6
284 TestNoKubernetes/serial/StartWithStopK8s 19.97
285 TestNoKubernetes/serial/Start 8.87
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
287 TestNoKubernetes/serial/ProfileList 1.19
288 TestNoKubernetes/serial/Stop 1.31
289 TestNoKubernetes/serial/StartNoArgs 8.85
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
291 TestStoppedBinaryUpgrade/Setup 0.88
292 TestStoppedBinaryUpgrade/Upgrade 100.45
293 TestStoppedBinaryUpgrade/MinikubeLogs 2.34
301 TestNetworkPlugins/group/auto/Start 91.17
302 TestNetworkPlugins/group/kindnet/Start 80.11
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
305 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
306 TestNetworkPlugins/group/auto/KubeletFlags 0.31
307 TestNetworkPlugins/group/auto/NetCatPod 10.29
308 TestNetworkPlugins/group/kindnet/DNS 0.2
309 TestNetworkPlugins/group/kindnet/Localhost 0.18
310 TestNetworkPlugins/group/kindnet/HairPin 0.17
311 TestNetworkPlugins/group/auto/DNS 0.2
312 TestNetworkPlugins/group/auto/Localhost 0.19
313 TestNetworkPlugins/group/auto/HairPin 0.2
314 TestNetworkPlugins/group/calico/Start 84.72
315 TestNetworkPlugins/group/custom-flannel/Start 62.48
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
318 TestNetworkPlugins/group/custom-flannel/DNS 0.25
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.44
323 TestNetworkPlugins/group/calico/NetCatPod 12.48
324 TestNetworkPlugins/group/calico/DNS 0.32
325 TestNetworkPlugins/group/calico/Localhost 0.3
326 TestNetworkPlugins/group/calico/HairPin 0.28
327 TestNetworkPlugins/group/false/Start 58.5
328 TestNetworkPlugins/group/enable-default-cni/Start 43.46
329 TestNetworkPlugins/group/false/KubeletFlags 0.38
330 TestNetworkPlugins/group/false/NetCatPod 10.36
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
333 TestNetworkPlugins/group/false/DNS 0.31
334 TestNetworkPlugins/group/false/Localhost 0.22
335 TestNetworkPlugins/group/false/HairPin 0.31
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.32
339 TestNetworkPlugins/group/flannel/Start 66.04
340 TestNetworkPlugins/group/bridge/Start 62.05
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
343 TestNetworkPlugins/group/flannel/NetCatPod 11.27
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
345 TestNetworkPlugins/group/bridge/NetCatPod 10.38
346 TestNetworkPlugins/group/flannel/DNS 0.19
347 TestNetworkPlugins/group/flannel/Localhost 0.17
348 TestNetworkPlugins/group/flannel/HairPin 0.17
349 TestNetworkPlugins/group/bridge/DNS 0.21
350 TestNetworkPlugins/group/bridge/Localhost 0.17
351 TestNetworkPlugins/group/bridge/HairPin 0.17
352 TestNetworkPlugins/group/kubenet/Start 73.73
354 TestStartStop/group/old-k8s-version/serial/FirstStart 157.59
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
356 TestNetworkPlugins/group/kubenet/NetCatPod 11.37
357 TestNetworkPlugins/group/kubenet/DNS 0.2
358 TestNetworkPlugins/group/kubenet/Localhost 0.19
359 TestNetworkPlugins/group/kubenet/HairPin 0.16
361 TestStartStop/group/no-preload/serial/FirstStart 80.24
362 TestStartStop/group/old-k8s-version/serial/DeployApp 10.55
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
364 TestStartStop/group/old-k8s-version/serial/Stop 11.07
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/old-k8s-version/serial/SecondStart 124.55
367 TestStartStop/group/no-preload/serial/DeployApp 9.51
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.63
369 TestStartStop/group/no-preload/serial/Stop 11.13
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
371 TestStartStop/group/no-preload/serial/SecondStart 268.27
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
375 TestStartStop/group/old-k8s-version/serial/Pause 2.88
377 TestStartStop/group/embed-certs/serial/FirstStart 47.88
378 TestStartStop/group/embed-certs/serial/DeployApp 9.34
379 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
380 TestStartStop/group/embed-certs/serial/Stop 10.89
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
382 TestStartStop/group/embed-certs/serial/SecondStart 268.02
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
386 TestStartStop/group/no-preload/serial/Pause 2.9
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.57
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.08
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.53
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/embed-certs/serial/Pause 2.92
399 TestStartStop/group/newest-cni/serial/FirstStart 36.88
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
402 TestStartStop/group/newest-cni/serial/Stop 9.64
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
404 TestStartStop/group/newest-cni/serial/SecondStart 17.63
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
408 TestStartStop/group/newest-cni/serial/Pause 2.93
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.76
TestDownloadOnly/v1.20.0/json-events (13.42s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-164565 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-164565 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.424199021s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.42s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 19:21:20.444816  722379 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 19:21:20.444905  722379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-164565
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-164565: exit status 85 (65.73367ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164565 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |          |
	|         | -p download-only-164565        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:21:07
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:21:07.065008  722384 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:21:07.065383  722384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:07.065415  722384 out.go:358] Setting ErrFile to fd 2...
	I0920 19:21:07.065436  722384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:07.065874  722384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	W0920 19:21:07.066071  722384 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-715609/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-715609/.minikube/config/config.json: no such file or directory
	I0920 19:21:07.066527  722384 out.go:352] Setting JSON to true
	I0920 19:21:07.067400  722384 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11018,"bootTime":1726849049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 19:21:07.067535  722384 start.go:139] virtualization:  
	I0920 19:21:07.069776  722384 out.go:97] [download-only-164565] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 19:21:07.070266  722384 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 19:21:07.070318  722384 notify.go:220] Checking for updates...
	I0920 19:21:07.071379  722384 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:21:07.072901  722384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:21:07.074688  722384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:21:07.076027  722384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	I0920 19:21:07.077509  722384 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:21:07.079954  722384 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:21:07.080214  722384 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:21:07.102318  722384 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:21:07.102427  722384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:07.158407  722384 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:21:07.1482446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:07.158523  722384 docker.go:318] overlay module found
	I0920 19:21:07.159961  722384 out.go:97] Using the docker driver based on user configuration
	I0920 19:21:07.159984  722384 start.go:297] selected driver: docker
	I0920 19:21:07.159992  722384 start.go:901] validating driver "docker" against <nil>
	I0920 19:21:07.160108  722384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:07.218187  722384 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:21:07.208142294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:07.218391  722384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:21:07.218655  722384 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:21:07.218804  722384 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:21:07.220356  722384 out.go:169] Using Docker driver with root privileges
	I0920 19:21:07.222012  722384 cni.go:84] Creating CNI manager for ""
	I0920 19:21:07.222081  722384 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 19:21:07.222166  722384 start.go:340] cluster config:
	{Name:download-only-164565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-164565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:21:07.223750  722384 out.go:97] Starting "download-only-164565" primary control-plane node in "download-only-164565" cluster
	I0920 19:21:07.223771  722384 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 19:21:07.225457  722384 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:21:07.225482  722384 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 19:21:07.225775  722384 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:21:07.242334  722384 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:21:07.242505  722384 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:21:07.242602  722384 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:21:07.394958  722384 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 19:21:07.395000  722384 cache.go:56] Caching tarball of preloaded images
	I0920 19:21:07.395835  722384 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 19:21:07.397710  722384 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 19:21:07.397731  722384 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 19:21:07.485816  722384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 19:21:11.397036  722384 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 19:21:11.397152  722384 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 19:21:12.428056  722384 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 19:21:12.428520  722384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/download-only-164565/config.json ...
	I0920 19:21:12.428557  722384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/download-only-164565/config.json: {Name:mk157fe4246cfab461c1981c11071540b98f48c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:21:12.429351  722384 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 19:21:12.430103  722384 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19678-715609/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-164565 host does not exist
	  To start a cluster, run: "minikube start -p download-only-164565"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-164565
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (4.68s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-090878 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-090878 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.677179588s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.68s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 19:21:25.522611  722379 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 19:21:25.522648  722379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-715609/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-090878
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-090878: exit status 85 (63.944646ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-164565 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | -p download-only-164565        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| delete  | -p download-only-164565        | download-only-164565 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC | 20 Sep 24 19:21 UTC |
	| start   | -o=json --download-only        | download-only-090878 | jenkins | v1.34.0 | 20 Sep 24 19:21 UTC |                     |
	|         | -p download-only-090878        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:21:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:21:20.886888  722588 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:21:20.887082  722588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:20.887096  722588 out.go:358] Setting ErrFile to fd 2...
	I0920 19:21:20.887102  722588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:21:20.887390  722588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:21:20.887849  722588 out.go:352] Setting JSON to true
	I0920 19:21:20.888792  722588 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":11032,"bootTime":1726849049,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 19:21:20.888880  722588 start.go:139] virtualization:  
	I0920 19:21:20.891300  722588 out.go:97] [download-only-090878] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:21:20.891567  722588 notify.go:220] Checking for updates...
	I0920 19:21:20.893418  722588 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:21:20.895243  722588 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:21:20.896459  722588 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:21:20.897958  722588 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	I0920 19:21:20.899304  722588 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:21:20.901911  722588 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:21:20.902164  722588 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:21:20.928163  722588 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:21:20.928285  722588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:20.989977  722588 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:21:20.97923312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:20.990088  722588 docker.go:318] overlay module found
	I0920 19:21:20.991598  722588 out.go:97] Using the docker driver based on user configuration
	I0920 19:21:20.991621  722588 start.go:297] selected driver: docker
	I0920 19:21:20.991628  722588 start.go:901] validating driver "docker" against <nil>
	I0920 19:21:20.991731  722588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:21:21.040235  722588 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:21:21.029932108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:21:21.040441  722588 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:21:21.040751  722588 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:21:21.040914  722588 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:21:21.042529  722588 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-090878 host does not exist
	  To start a cluster, run: "minikube start -p download-only-090878"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-090878
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)
=== RUN   TestBinaryMirror
I0920 19:21:26.711297  722379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-896186 --alsologtostderr --binary-mirror http://127.0.0.1:43279 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-896186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-896186
--- PASS: TestBinaryMirror (0.57s)

TestOffline (59.79s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-204930 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-204930 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (57.624913234s)
helpers_test.go:175: Cleaning up "offline-docker-204930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-204930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-204930: (2.161823292s)
--- PASS: TestOffline (59.79s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-711398
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-711398: exit status 85 (71.169287ms)

-- stdout --
	* Profile "addons-711398" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711398"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-711398
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-711398: exit status 85 (65.05178ms)

-- stdout --
	* Profile "addons-711398" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711398"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (221.42s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-711398 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-711398 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.422041874s)
--- PASS: TestAddons/Setup (221.42s)

TestAddons/serial/Volcano (41.44s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 58.303706ms
addons_test.go:851: volcano-controller stabilized in 58.423835ms
addons_test.go:835: volcano-scheduler stabilized in 59.096139ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-zm2fb" [25937651-c818-4ff8-9dfa-8d9ac29d5570] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003901656s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fsrpm" [c0324b81-0853-4fbd-9369-e6c9e7dbad34] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004744446s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-c55bg" [47b8b51b-3c92-4cca-8253-93e743d30d2c] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005004895s
addons_test.go:870: (dbg) Run:  kubectl --context addons-711398 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-711398 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-711398 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [795d3458-db0e-43d9-b2bc-0be2e333f021] Pending
helpers_test.go:344: "test-job-nginx-0" [795d3458-db0e-43d9-b2bc-0be2e333f021] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [795d3458-db0e-43d9-b2bc-0be2e333f021] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003680439s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable volcano --alsologtostderr -v=1: (10.781526127s)
--- PASS: TestAddons/serial/Volcano (41.44s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-711398 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-711398 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Ingress (21.51s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-711398 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-711398 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-711398 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [105dba5b-0e8f-4e6d-a15d-37507e3f6098] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [105dba5b-0e8f-4e6d-a15d-37507e3f6098] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004032453s
I0920 19:35:48.823021  722379 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-711398 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable ingress-dns --alsologtostderr -v=1: (1.162974168s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable ingress --alsologtostderr -v=1: (7.731507588s)
--- PASS: TestAddons/parallel/Ingress (21.51s)

TestAddons/parallel/InspektorGadget (11.81s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hp9tl" [53daf043-7f53-406d-ac9d-0815258bc4b1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003807234s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-711398
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-711398: (5.808130753s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/MetricsServer (5.75s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.629893ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cwvt9" [2c61ac4c-97fc-4401-96cb-98c474378544] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003845426s
addons_test.go:413: (dbg) Run:  kubectl --context addons-711398 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/CSI (30.29s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 19:34:04.524635  722379 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 19:34:04.529818  722379 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 19:34:04.529847  722379 kapi.go:107] duration metric: took 10.573962ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 10.583127ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-711398 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-711398 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [69357d41-3487-4be6-ac0f-583ba8153f4e] Pending
helpers_test.go:344: "task-pv-pod" [69357d41-3487-4be6-ac0f-583ba8153f4e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [69357d41-3487-4be6-ac0f-583ba8153f4e] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003790389s
addons_test.go:528: (dbg) Run:  kubectl --context addons-711398 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-711398 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-711398 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-711398 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-711398 delete pod task-pv-pod: (1.388000547s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-711398 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-711398 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-711398 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ef0a2b0b-a751-49c4-a89a-c9efa8bfc3ad] Pending
helpers_test.go:344: "task-pv-pod-restore" [ef0a2b0b-a751-49c4-a89a-c9efa8bfc3ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ef0a2b0b-a751-49c4-a89a-c9efa8bfc3ad] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004172348s
addons_test.go:570: (dbg) Run:  kubectl --context addons-711398 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-711398 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-711398 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.71243623s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (30.29s)

TestAddons/parallel/Headlamp (17.69s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-711398 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-711398 --alsologtostderr -v=1: (1.002598667s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-z4g2r" [8cc20fb1-1cd5-4469-8066-99da01330d8a] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-z4g2r" [8cc20fb1-1cd5-4469-8066-99da01330d8a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-z4g2r" [8cc20fb1-1cd5-4469-8066-99da01330d8a] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003581426s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable headlamp --alsologtostderr -v=1: (5.68340114s)
--- PASS: TestAddons/parallel/Headlamp (17.69s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-g4968" [c272c531-7535-44d0-8581-32aeb3d0dd70] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003191524s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-711398
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (52.55s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-711398 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-711398 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d47b0adf-f86e-432d-bc3e-8ad9dd1b6da2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d47b0adf-f86e-432d-bc3e-8ad9dd1b6da2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d47b0adf-f86e-432d-bc3e-8ad9dd1b6da2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00405264s
addons_test.go:938: (dbg) Run:  kubectl --context addons-711398 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 ssh "cat /opt/local-path-provisioner/pvc-9a9bf7c2-70be-4ebd-8920-1988957db53e_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-711398 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-711398 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.356622747s)
--- PASS: TestAddons/parallel/LocalPath (52.55s)

TestAddons/parallel/NvidiaDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wqj2f" [706a55de-ce14-438b-bd2d-4793efdd30e7] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003646463s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-711398
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

TestAddons/parallel/Yakd (10.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-d8f58" [e64b741c-6bf6-4e00-adca-0542ba6b9405] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008327241s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-711398 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-711398 addons disable yakd --alsologtostderr -v=1: (5.736528759s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

TestAddons/StoppedEnableDisable (11.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-711398
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-711398: (11.045548302s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-711398
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-711398
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-711398
--- PASS: TestAddons/StoppedEnableDisable (11.32s)

TestCertOptions (44.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-301113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-301113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.103069003s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-301113 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-301113 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-301113 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-301113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-301113
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-301113: (2.311387286s)
--- PASS: TestCertOptions (44.43s)

TestCertExpiration (253.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652195 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652195 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (45.177980608s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652195 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652195 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.514877552s)
helpers_test.go:175: Cleaning up "cert-expiration-652195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-652195
E0920 20:19:23.402025  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-652195: (2.24533093s)
--- PASS: TestCertExpiration (253.94s)

TestDockerFlags (45.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-185535 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-185535 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.606413493s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-185535 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-185535 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-185535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-185535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-185535: (2.854011536s)
--- PASS: TestDockerFlags (45.23s)

TestForceSystemdFlag (44.89s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-238284 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-238284 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.697506126s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-238284 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-238284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-238284
E0920 20:15:08.802350  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-238284: (2.466501853s)
--- PASS: TestForceSystemdFlag (44.89s)

TestForceSystemdEnv (42.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-111328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-111328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.354878236s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-111328 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-111328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-111328
E0920 20:14:26.875554  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-111328: (2.414553228s)
--- PASS: TestForceSystemdEnv (42.23s)

TestErrorSpam/setup (35.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-289977 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-289977 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-289977 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-289977 --driver=docker  --container-runtime=docker: (35.762203691s)
--- PASS: TestErrorSpam/setup (35.76s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.06s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 status
--- PASS: TestErrorSpam/status (1.06s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.51s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

TestErrorSpam/stop (11.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 stop: (11.006165948s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-289977 --log_dir /tmp/nospam-289977 stop
--- PASS: TestErrorSpam/stop (11.20s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-715609/.minikube/files/etc/test/nested/copy/722379/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-087953 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (48.230479909s)
--- PASS: TestFunctional/serial/StartWithProxy (48.23s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.1s)

=== RUN   TestFunctional/serial/SoftStart
I0920 19:37:54.824385  722379 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-087953 --alsologtostderr -v=8: (31.099325256s)
functional_test.go:663: soft start took 31.101963952s for "functional-087953" cluster.
I0920 19:38:25.924061  722379 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (31.10s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-087953 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:3.1: (1.19455564s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:3.3: (1.212219919s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 cache add registry.k8s.io/pause:latest: (1.015485921s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-087953 /tmp/TestFunctionalserialCacheCmdcacheadd_local1468680407/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache add minikube-local-cache-test:functional-087953
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache delete minikube-local-cache-test:functional-087953
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-087953
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.240771ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 kubectl -- --context functional-087953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-087953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (45.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-087953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.315036669s)
functional_test.go:761: restart took 45.315138091s for "functional-087953" cluster.
I0920 19:39:18.326925  722379 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (45.32s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-087953 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 logs: (1.202005807s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 logs --file /tmp/TestFunctionalserialLogsFileCmd4123046767/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 logs --file /tmp/TestFunctionalserialLogsFileCmd4123046767/001/logs.txt: (1.211905001s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (5.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-087953 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-087953
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-087953: exit status 115 (645.930514ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32219 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-087953 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-087953 delete -f testdata/invalidsvc.yaml: (1.272576481s)
--- PASS: TestFunctional/serial/InvalidService (5.17s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 config get cpus: exit status 14 (66.230567ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 config get cpus: exit status 14 (100.768647ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (9.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-087953 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-087953 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 766076: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.47s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-087953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (238.417456ms)

-- stdout --
	* [functional-087953] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 19:40:09.604186  765388 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:40:09.604427  765388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:09.604441  765388 out.go:358] Setting ErrFile to fd 2...
	I0920 19:40:09.604448  765388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:09.604862  765388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:40:09.605574  765388 out.go:352] Setting JSON to false
	I0920 19:40:09.607079  765388 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12161,"bootTime":1726849049,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 19:40:09.607273  765388 start.go:139] virtualization:  
	I0920 19:40:09.610926  765388 out.go:177] * [functional-087953] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:40:09.614442  765388 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:40:09.614600  765388 notify.go:220] Checking for updates...
	I0920 19:40:09.620545  765388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:40:09.628217  765388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:40:09.630939  765388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	I0920 19:40:09.634038  765388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:40:09.636947  765388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:40:09.640168  765388 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:40:09.640843  765388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:40:09.677947  765388 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:40:09.678130  765388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:40:09.763724  765388 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:40:09.751011243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:40:09.763835  765388 docker.go:318] overlay module found
	I0920 19:40:09.766703  765388 out.go:177] * Using the docker driver based on existing profile
	I0920 19:40:09.769581  765388 start.go:297] selected driver: docker
	I0920 19:40:09.769603  765388 start.go:901] validating driver "docker" against &{Name:functional-087953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-087953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:40:09.769722  765388 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:40:09.773014  765388 out.go:201] 
	W0920 19:40:09.775932  765388 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 19:40:09.778715  765388 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.59s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-087953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-087953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (240.118739ms)

-- stdout --
	* [functional-087953] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 19:40:10.407119  765624 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:40:10.407332  765624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:10.407354  765624 out.go:358] Setting ErrFile to fd 2...
	I0920 19:40:10.407372  765624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:10.412918  765624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:40:10.413478  765624 out.go:352] Setting JSON to false
	I0920 19:40:10.414562  765624 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":12162,"bootTime":1726849049,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 19:40:10.414674  765624 start.go:139] virtualization:  
	I0920 19:40:10.417894  765624 out.go:177] * [functional-087953] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 19:40:10.421196  765624 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:40:10.421303  765624 notify.go:220] Checking for updates...
	I0920 19:40:10.426395  765624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:40:10.429189  765624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	I0920 19:40:10.431770  765624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	I0920 19:40:10.434332  765624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:40:10.437021  765624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:40:10.440308  765624 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:40:10.440884  765624 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:40:10.489807  765624 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:40:10.490031  765624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:40:10.565255  765624 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:40:10.554898746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:40:10.565363  765624 docker.go:318] overlay module found
	I0920 19:40:10.571390  765624 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 19:40:10.574283  765624 start.go:297] selected driver: docker
	I0920 19:40:10.574312  765624 start.go:901] validating driver "docker" against &{Name:functional-087953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-087953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:40:10.574425  765624 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:40:10.577571  765624 out.go:201] 
	W0920 19:40:10.580227  765624 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 19:40:10.582901  765624 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 status -o json
E0920 19:40:11.377228  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

TestFunctional/parallel/ServiceCmdConnect (10.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-087953 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-087953 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xwhkl" [979e1495-26de-461a-8075-d89b4b44704f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-xwhkl" [979e1495-26de-461a-8075-d89b4b44704f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004525543s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31120
functional_test.go:1675: http://192.168.49.2:31120: success! body:

Hostname: hello-node-connect-65d86f57f4-xwhkl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31120
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.66s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (30.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e888de94-d80c-4808-a07c-d54a1788fbfb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003384874s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-087953 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-087953 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-087953 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-087953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b451b46a-fd10-4f69-8f5c-599612faa1fc] Pending
helpers_test.go:344: "sp-pod" [b451b46a-fd10-4f69-8f5c-599612faa1fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b451b46a-fd10-4f69-8f5c-599612faa1fc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00430018s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-087953 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-087953 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-087953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63bf3955-ee40-497d-bbac-38189bf8258a] Pending
helpers_test.go:344: "sp-pod" [63bf3955-ee40-497d-bbac-38189bf8258a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [63bf3955-ee40-497d-bbac-38189bf8258a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005867284s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-087953 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.08s)

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.61s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh -n functional-087953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cp functional-087953:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3946599624/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh -n functional-087953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh -n functional-087953 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.61s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/722379/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /etc/test/nested/copy/722379/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/722379.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /etc/ssl/certs/722379.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/722379.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /usr/share/ca-certificates/722379.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7223792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /etc/ssl/certs/7223792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7223792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /usr/share/ca-certificates/7223792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-087953 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh "sudo systemctl is-active crio": exit status 1 (400.248828ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 760266: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 version -o=json --components: (1.246560909s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-087953 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-087953
docker.io/kicbase/echo-server:functional-087953
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-087953 image ls --format short --alsologtostderr:
I0920 19:40:12.780673  766095 out.go:345] Setting OutFile to fd 1 ...
I0920 19:40:12.783872  766095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:12.783907  766095 out.go:358] Setting ErrFile to fd 2...
I0920 19:40:12.783945  766095 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:12.784360  766095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
I0920 19:40:12.785244  766095 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:12.785422  766095 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:12.786077  766095 cli_runner.go:164] Run: docker container inspect functional-087953 --format={{.State.Status}}
I0920 19:40:12.811820  766095 ssh_runner.go:195] Run: systemctl --version
I0920 19:40:12.811940  766095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-087953
I0920 19:40:12.842572  766095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/functional-087953/id_rsa Username:docker}
I0920 19:40:12.945475  766095 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-087953 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-087953 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-087953 | 9c3f817046dca | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/minikube-local-cache-test | functional-087953 | 4c59f77ae849a | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-087953 image ls --format table --alsologtostderr:
I0920 19:40:17.171269  766630 out.go:345] Setting OutFile to fd 1 ...
I0920 19:40:17.171519  766630 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:17.171546  766630 out.go:358] Setting ErrFile to fd 2...
I0920 19:40:17.171566  766630 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:17.171865  766630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
I0920 19:40:17.172650  766630 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:17.172832  766630 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:17.173357  766630 cli_runner.go:164] Run: docker container inspect functional-087953 --format={{.State.Status}}
I0920 19:40:17.201670  766630 ssh_runner.go:195] Run: systemctl --version
I0920 19:40:17.201725  766630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-087953
I0920 19:40:17.223690  766630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/functional-087953/id_rsa Username:docker}
I0920 19:40:17.321001  766630 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-087953 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"9c3f817046dca262f09e448b2e375d7aaaec5f004034ea277ad65946d194a759","repoDigests":[],"repoTags":["localhost/my-image:functional-087953"],"size":"1410000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-087953"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"4c59f77ae849a2974699f1fbc43ae9e3566c6c15a1077ced0edf6b4dd0d0d252","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-087953"],"size":"30"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-087953 image ls --format json --alsologtostderr:
I0920 19:40:16.930059  766596 out.go:345] Setting OutFile to fd 1 ...
I0920 19:40:16.930460  766596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:16.930469  766596 out.go:358] Setting ErrFile to fd 2...
I0920 19:40:16.930475  766596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:16.930744  766596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
I0920 19:40:16.931417  766596 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:16.931524  766596 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:16.932007  766596 cli_runner.go:164] Run: docker container inspect functional-087953 --format={{.State.Status}}
I0920 19:40:16.954022  766596 ssh_runner.go:195] Run: systemctl --version
I0920 19:40:16.954082  766596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-087953
I0920 19:40:16.972458  766596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/functional-087953/id_rsa Username:docker}
I0920 19:40:17.068981  766596 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-087953 image ls --format yaml --alsologtostderr:
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-087953
size: "4780000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 4c59f77ae849a2974699f1fbc43ae9e3566c6c15a1077ced0edf6b4dd0d0d252
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-087953
size: "30"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-087953 image ls --format yaml --alsologtostderr:
I0920 19:40:13.049966  766215 out.go:345] Setting OutFile to fd 1 ...
I0920 19:40:13.050214  766215 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:13.050243  766215 out.go:358] Setting ErrFile to fd 2...
I0920 19:40:13.050264  766215 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:13.050580  766215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
I0920 19:40:13.051315  766215 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:13.051493  766215 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:13.052043  766215 cli_runner.go:164] Run: docker container inspect functional-087953 --format={{.State.Status}}
I0920 19:40:13.084168  766215 ssh_runner.go:195] Run: systemctl --version
I0920 19:40:13.084245  766215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-087953
I0920 19:40:13.109853  766215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/functional-087953/id_rsa Username:docker}
I0920 19:40:13.217513  766215 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
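As the trace above shows, `image ls` shells into the node and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A minimal sketch of consuming that stream (the sample record below is fabricated for illustration, not taken from this run):

```shell
# Parse docker's JSON-lines image listing; the sample record is made up.
printf '%s\n' '{"Repository":"registry.k8s.io/pause","Tag":"3.3","ID":"3d18732f8686","Size":"484kB"}' |
while IFS= read -r line; do
  python3 -c 'import json,sys; r=json.loads(sys.argv[1]); print(r["Repository"]+":"+r["Tag"], r["Size"])' "$line"
done
# prints: registry.k8s.io/pause:3.3 484kB
```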

TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh pgrep buildkitd: exit status 1 (277.904385ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image build -t localhost/my-image:functional-087953 testdata/build --alsologtostderr
E0920 19:40:13.939160  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 image build -t localhost/my-image:functional-087953 testdata/build --alsologtostderr: (3.09074397s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-087953 image build -t localhost/my-image:functional-087953 testdata/build --alsologtostderr:
I0920 19:40:13.606220  766430 out.go:345] Setting OutFile to fd 1 ...
I0920 19:40:13.607567  766430 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:13.607645  766430 out.go:358] Setting ErrFile to fd 2...
I0920 19:40:13.607671  766430 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:40:13.608011  766430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
I0920 19:40:13.609326  766430 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:13.610769  766430 config.go:182] Loaded profile config "functional-087953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 19:40:13.611365  766430 cli_runner.go:164] Run: docker container inspect functional-087953 --format={{.State.Status}}
I0920 19:40:13.643745  766430 ssh_runner.go:195] Run: systemctl --version
I0920 19:40:13.643801  766430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-087953
I0920 19:40:13.661775  766430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/functional-087953/id_rsa Username:docker}
I0920 19:40:13.769213  766430 build_images.go:161] Building image from path: /tmp/build.4078508182.tar
I0920 19:40:13.769282  766430 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 19:40:13.779552  766430 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4078508182.tar
I0920 19:40:13.783227  766430 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4078508182.tar: stat -c "%s %y" /var/lib/minikube/build/build.4078508182.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4078508182.tar': No such file or directory
I0920 19:40:13.783257  766430 ssh_runner.go:362] scp /tmp/build.4078508182.tar --> /var/lib/minikube/build/build.4078508182.tar (3072 bytes)
I0920 19:40:13.818463  766430 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4078508182
I0920 19:40:13.828495  766430 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4078508182 -xf /var/lib/minikube/build/build.4078508182.tar
I0920 19:40:13.839833  766430 docker.go:360] Building image: /var/lib/minikube/build/build.4078508182
I0920 19:40:13.839904  766430 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-087953 /var/lib/minikube/build/build.4078508182
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:9c3f817046dca262f09e448b2e375d7aaaec5f004034ea277ad65946d194a759 done
#8 naming to localhost/my-image:functional-087953 done
#8 DONE 0.0s
I0920 19:40:16.603518  766430 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-087953 /var/lib/minikube/build/build.4078508182: (2.76358205s)
I0920 19:40:16.603590  766430 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4078508182
I0920 19:40:16.613405  766430 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4078508182.tar
I0920 19:40:16.622945  766430 build_images.go:217] Built localhost/my-image:functional-087953 from /tmp/build.4078508182.tar
I0920 19:40:16.622978  766430 build_images.go:133] succeeded building to: functional-087953
I0920 19:40:16.622984  766430 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.61s)
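The build trace above follows a fixed staging sequence: pack the local build context into a tar, copy it to `/var/lib/minikube/build` inside the node, extract it, then run `docker build` against the extracted directory. The pre-docker portion can be sketched locally; temp directories stand in for the node filesystem, and the final `docker build` is left commented since it needs the node's Docker daemon:

```shell
# Sketch of the staging steps seen in build_images.go, with local temp dirs
# standing in for /tmp/build.NNN.tar and /var/lib/minikube/build.
set -eu
ctx=$(mktemp -d)                          # stand-in for testdata/build
printf 'FROM scratch\nADD content.txt /\n' > "$ctx/Dockerfile"
printf 'hello' > "$ctx/content.txt"
stage=$(mktemp -d)                        # stand-in for the node's build dir
tar -C "$ctx" -cf "$stage/build.tar" .    # pack the context
mkdir -p "$stage/build"
tar -C "$stage/build" -xf "$stage/build.tar"   # unpack on the "node"
ls "$stage/build"                         # Dockerfile and content.txt
# docker build -t localhost/my-image:functional-087953 "$stage/build"
```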

TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-087953
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-087953 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2cfe79cd-911d-4ec1-be90-dfd59a4647ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2cfe79cd-911d-4ec1-be90-dfd59a4647ee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004066237s
I0920 19:39:37.207720  722379 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image load --daemon kicbase/echo-server:functional-087953 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image load --daemon kicbase/echo-server:functional-087953 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-087953
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image load --daemon kicbase/echo-server:functional-087953 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image save kicbase/echo-server:functional-087953 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image rm kicbase/echo-server:functional-087953 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-087953
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 image save --daemon kicbase/echo-server:functional-087953 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-087953
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

TestFunctional/parallel/DockerEnv/bash (1s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-087953 docker-env) && out/minikube-linux-arm64 status -p functional-087953"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-087953 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.00s)
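The DockerEnv test exercises the pattern `eval $(minikube docker-env)`: the subcommand prints shell `export` lines, and `eval` applies them so the local docker CLI talks to the daemon inside the node. The mechanism, simulated with a stand-in emitter (the exported values below are hypothetical, not taken from this run):

```shell
# Simulate the docker-env pattern: a command emits export lines, eval applies
# them to the current shell. emit_env and its values are hypothetical.
emit_env() {
  printf 'export DOCKER_HOST="tcp://192.168.49.2:2376"\n'
  printf 'export DOCKER_TLS_VERIFY="1"\n'
}
eval "$(emit_env)"
echo "$DOCKER_HOST"    # prints: tcp://192.168.49.2:2376
```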

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 update-context --alsologtostderr -v=2
E0920 19:40:19.060418  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
2024/09/20 19:40:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-087953 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.156.240 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-087953 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (7.41s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdany-port2101457780/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726861177420979929" to /tmp/TestFunctionalparallelMountCmdany-port2101457780/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726861177420979929" to /tmp/TestFunctionalparallelMountCmdany-port2101457780/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726861177420979929" to /tmp/TestFunctionalparallelMountCmdany-port2101457780/001/test-1726861177420979929
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.060178ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 19:39:37.845315  722379 retry.go:31] will retry after 692.711363ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 19:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 19:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 19:39 test-1726861177420979929
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh cat /mount-9p/test-1726861177420979929
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-087953 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2c79b582-36af-4b78-af65-32fcfacab0a7] Pending
helpers_test.go:344: "busybox-mount" [2c79b582-36af-4b78-af65-32fcfacab0a7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2c79b582-36af-4b78-af65-32fcfacab0a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2c79b582-36af-4b78-af65-32fcfacab0a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003923161s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-087953 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdany-port2101457780/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.41s)

TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdspecific-port1478723693/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (456.913514ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 19:39:45.288919  722379 retry.go:31] will retry after 264.907054ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdspecific-port1478723693/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-087953 ssh "sudo umount -f /mount-9p": exit status 1 (399.021638ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-087953 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdspecific-port1478723693/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
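The exit-status-32 result above ("umount: /mount-9p: not mounted") is expected: the mount daemon was stopped first, so the forced unmount finds nothing to remove and the test still passes. A cleanup helper in the same spirit, which unmounts only when the path is actually mounted (Linux-specific `/proc/mounts` check; `/mount-9p` is used as an example path and will normally not exist on the host):

```shell
# Unmount only if the path appears in /proc/mounts; a forced umount of an
# unmounted path exits 32 ("not mounted"), as seen in the log above.
unmount_if_mounted() {
  if grep -q " $1 " /proc/mounts 2>/dev/null; then
    sudo umount -f "$1"
  else
    echo "$1: not mounted, nothing to do"
  fi
}
unmount_if_mounted /mount-9p
# prints: /mount-9p: not mounted, nothing to do
```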

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T" /mount1: (1.012148463s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-087953 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-087953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup884817799/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-087953 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-087953 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-br6fg" [4d224b5a-f13c-407c-bc1a-6ef8e14928db] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-br6fg" [4d224b5a-f13c-407c-bc1a-6ef8e14928db] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004006928s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service list -o json
functional_test.go:1494: Took "552.039124ms" to run "out/minikube-linux-arm64 -p functional-087953 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "427.563511ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
E0920 19:40:08.967540  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1329: Took "95.025148ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service --namespace=default --https --url hello-node
E0920 19:40:08.803702  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:08.810132  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:08.821563  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:08.842924  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:08.884689  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1522: found endpoint: https://192.168.49.2:31730
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E0920 19:40:09.129883  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "443.478786ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "78.559731ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service hello-node --url --format={{.IP}}
E0920 19:40:09.452716  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-087953 service hello-node --url
E0920 19:40:10.095312  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31730
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-087953
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-087953
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-087953
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (123.14s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-002327 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 19:40:29.302455  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:49.784676  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:41:30.746312  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-002327 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.260811218s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.14s)

TestMultiControlPlane/serial/DeployApp (45.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-002327 -- rollout status deployment/busybox: (5.240704213s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:31.652695  722379 retry.go:31] will retry after 857.930046ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:32.666872  722379 retry.go:31] will retry after 1.722967402s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:34.550170  722379 retry.go:31] will retry after 2.602784998s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:37.335068  722379 retry.go:31] will retry after 4.876484728s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:42.392044  722379 retry.go:31] will retry after 3.841399045s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:46.408135  722379 retry.go:31] will retry after 6.701903405s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0920 19:42:52.668302  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0920 19:42:53.285359  722379 retry.go:31] will retry after 14.882722749s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-2mkjb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-gtrhd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-mh4nr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-2mkjb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-gtrhd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-mh4nr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-2mkjb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-gtrhd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-mh4nr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.12s)

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-2mkjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-2mkjb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-gtrhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-gtrhd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-mh4nr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-002327 -- exec busybox-7dff88458-mh4nr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)

TestMultiControlPlane/serial/AddWorkerNode (28.74s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-002327 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-002327 -v=7 --alsologtostderr: (27.709728883s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr: (1.027236772s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.74s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-002327 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.104032544s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (19.44s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 status --output json -v=7 --alsologtostderr: (1.003622159s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp testdata/cp-test.txt ha-002327:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile712372115/001/cp-test_ha-002327.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327:/home/docker/cp-test.txt ha-002327-m02:/home/docker/cp-test_ha-002327_ha-002327-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test_ha-002327_ha-002327-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327:/home/docker/cp-test.txt ha-002327-m03:/home/docker/cp-test_ha-002327_ha-002327-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test_ha-002327_ha-002327-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327:/home/docker/cp-test.txt ha-002327-m04:/home/docker/cp-test_ha-002327_ha-002327-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test_ha-002327_ha-002327-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp testdata/cp-test.txt ha-002327-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile712372115/001/cp-test_ha-002327-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m02:/home/docker/cp-test.txt ha-002327:/home/docker/cp-test_ha-002327-m02_ha-002327.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test_ha-002327-m02_ha-002327.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m02:/home/docker/cp-test.txt ha-002327-m03:/home/docker/cp-test_ha-002327-m02_ha-002327-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test_ha-002327-m02_ha-002327-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m02:/home/docker/cp-test.txt ha-002327-m04:/home/docker/cp-test_ha-002327-m02_ha-002327-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test_ha-002327-m02_ha-002327-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp testdata/cp-test.txt ha-002327-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile712372115/001/cp-test_ha-002327-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m03:/home/docker/cp-test.txt ha-002327:/home/docker/cp-test_ha-002327-m03_ha-002327.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test_ha-002327-m03_ha-002327.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m03:/home/docker/cp-test.txt ha-002327-m02:/home/docker/cp-test_ha-002327-m03_ha-002327-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test_ha-002327-m03_ha-002327-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m03:/home/docker/cp-test.txt ha-002327-m04:/home/docker/cp-test_ha-002327-m03_ha-002327-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test_ha-002327-m03_ha-002327-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp testdata/cp-test.txt ha-002327-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile712372115/001/cp-test_ha-002327-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m04:/home/docker/cp-test.txt ha-002327:/home/docker/cp-test_ha-002327-m04_ha-002327.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327 "sudo cat /home/docker/cp-test_ha-002327-m04_ha-002327.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m04:/home/docker/cp-test.txt ha-002327-m02:/home/docker/cp-test_ha-002327-m04_ha-002327-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m02 "sudo cat /home/docker/cp-test_ha-002327-m04_ha-002327-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 cp ha-002327-m04:/home/docker/cp-test.txt ha-002327-m03:/home/docker/cp-test_ha-002327-m04_ha-002327-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 ssh -n ha-002327-m03 "sudo cat /home/docker/cp-test_ha-002327-m04_ha-002327-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.44s)

TestMultiControlPlane/serial/StopSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 node stop m02 -v=7 --alsologtostderr: (10.991693993s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr: exit status 7 (770.04025ms)

-- stdout --
	ha-002327
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-002327-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-002327-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-002327-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 19:44:13.228622  789519 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:44:13.228771  789519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:44:13.228783  789519 out.go:358] Setting ErrFile to fd 2...
	I0920 19:44:13.228788  789519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:44:13.229041  789519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:44:13.229215  789519 out.go:352] Setting JSON to false
	I0920 19:44:13.229259  789519 mustload.go:65] Loading cluster: ha-002327
	I0920 19:44:13.229309  789519 notify.go:220] Checking for updates...
	I0920 19:44:13.229684  789519 config.go:182] Loaded profile config "ha-002327": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:44:13.229705  789519 status.go:174] checking status of ha-002327 ...
	I0920 19:44:13.230299  789519 cli_runner.go:164] Run: docker container inspect ha-002327 --format={{.State.Status}}
	I0920 19:44:13.251719  789519 status.go:364] ha-002327 host status = "Running" (err=<nil>)
	I0920 19:44:13.251744  789519 host.go:66] Checking if "ha-002327" exists ...
	I0920 19:44:13.252067  789519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-002327
	I0920 19:44:13.279877  789519 host.go:66] Checking if "ha-002327" exists ...
	I0920 19:44:13.280198  789519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:44:13.280255  789519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-002327
	I0920 19:44:13.299533  789519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/ha-002327/id_rsa Username:docker}
	I0920 19:44:13.415284  789519 ssh_runner.go:195] Run: systemctl --version
	I0920 19:44:13.420856  789519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:44:13.433839  789519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:44:13.487503  789519 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 19:44:13.477291517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:44:13.488397  789519 kubeconfig.go:125] found "ha-002327" server: "https://192.168.49.254:8443"
	I0920 19:44:13.488447  789519 api_server.go:166] Checking apiserver status ...
	I0920 19:44:13.488502  789519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:44:13.501976  789519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2333/cgroup
	I0920 19:44:13.511785  789519 api_server.go:182] apiserver freezer: "13:freezer:/docker/c607a90de2ea22d333ccf10ce9bbac754f05923f4d594afeafc00e02201d4bbd/kubepods/burstable/pod25788302631069b22418f34d6fc72598/a321aa830d2d51fba967f827dce30a59c61e9bf6c2eb8b012d793a54db94d185"
	I0920 19:44:13.511868  789519 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c607a90de2ea22d333ccf10ce9bbac754f05923f4d594afeafc00e02201d4bbd/kubepods/burstable/pod25788302631069b22418f34d6fc72598/a321aa830d2d51fba967f827dce30a59c61e9bf6c2eb8b012d793a54db94d185/freezer.state
	I0920 19:44:13.521118  789519 api_server.go:204] freezer state: "THAWED"
	I0920 19:44:13.521146  789519 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:44:13.529149  789519 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:44:13.529178  789519 status.go:456] ha-002327 apiserver status = Running (err=<nil>)
	I0920 19:44:13.529189  789519 status.go:176] ha-002327 status: &{Name:ha-002327 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:44:13.529229  789519 status.go:174] checking status of ha-002327-m02 ...
	I0920 19:44:13.529577  789519 cli_runner.go:164] Run: docker container inspect ha-002327-m02 --format={{.State.Status}}
	I0920 19:44:13.552148  789519 status.go:364] ha-002327-m02 host status = "Stopped" (err=<nil>)
	I0920 19:44:13.552169  789519 status.go:377] host is not running, skipping remaining checks
	I0920 19:44:13.552176  789519 status.go:176] ha-002327-m02 status: &{Name:ha-002327-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:44:13.552195  789519 status.go:174] checking status of ha-002327-m03 ...
	I0920 19:44:13.552494  789519 cli_runner.go:164] Run: docker container inspect ha-002327-m03 --format={{.State.Status}}
	I0920 19:44:13.577660  789519 status.go:364] ha-002327-m03 host status = "Running" (err=<nil>)
	I0920 19:44:13.577699  789519 host.go:66] Checking if "ha-002327-m03" exists ...
	I0920 19:44:13.577997  789519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-002327-m03
	I0920 19:44:13.593686  789519 host.go:66] Checking if "ha-002327-m03" exists ...
	I0920 19:44:13.593998  789519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:44:13.594051  789519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-002327-m03
	I0920 19:44:13.611053  789519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/ha-002327-m03/id_rsa Username:docker}
	I0920 19:44:13.713134  789519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:44:13.724999  789519 kubeconfig.go:125] found "ha-002327" server: "https://192.168.49.254:8443"
	I0920 19:44:13.725029  789519 api_server.go:166] Checking apiserver status ...
	I0920 19:44:13.725100  789519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:44:13.736467  789519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	I0920 19:44:13.746004  789519 api_server.go:182] apiserver freezer: "13:freezer:/docker/7b835b48e3048d48357333a73e5ae1e91bd935f531bc4131b2cdcbc23614ef98/kubepods/burstable/pod11e13205fe6a2c5c0ef351ff7e7085d4/58d5a5ab382a0d2315b06a83b0dae4b0496a1df3828d5fd9093c95ae09ee88e9"
	I0920 19:44:13.746108  789519 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7b835b48e3048d48357333a73e5ae1e91bd935f531bc4131b2cdcbc23614ef98/kubepods/burstable/pod11e13205fe6a2c5c0ef351ff7e7085d4/58d5a5ab382a0d2315b06a83b0dae4b0496a1df3828d5fd9093c95ae09ee88e9/freezer.state
	I0920 19:44:13.754703  789519 api_server.go:204] freezer state: "THAWED"
	I0920 19:44:13.754736  789519 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:44:13.762490  789519 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:44:13.762527  789519 status.go:456] ha-002327-m03 apiserver status = Running (err=<nil>)
	I0920 19:44:13.762537  789519 status.go:176] ha-002327-m03 status: &{Name:ha-002327-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:44:13.762559  789519 status.go:174] checking status of ha-002327-m04 ...
	I0920 19:44:13.762867  789519 cli_runner.go:164] Run: docker container inspect ha-002327-m04 --format={{.State.Status}}
	I0920 19:44:13.789291  789519 status.go:364] ha-002327-m04 host status = "Running" (err=<nil>)
	I0920 19:44:13.789314  789519 host.go:66] Checking if "ha-002327-m04" exists ...
	I0920 19:44:13.789643  789519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-002327-m04
	I0920 19:44:13.806599  789519 host.go:66] Checking if "ha-002327-m04" exists ...
	I0920 19:44:13.806997  789519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:44:13.807040  789519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-002327-m04
	I0920 19:44:13.827421  789519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/ha-002327-m04/id_rsa Username:docker}
	I0920 19:44:13.925754  789519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:44:13.947706  789519 status.go:176] ha-002327-m04 status: &{Name:ha-002327-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.76s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (77.04s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 node start m02 -v=7 --alsologtostderr
E0920 19:44:26.877010  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:26.883963  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:26.895507  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:26.916883  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:26.958412  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:27.039996  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:27.201512  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:27.523178  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:28.164841  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:29.446094  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:32.008275  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:37.130062  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:44:47.372366  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:45:07.853634  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:45:08.802334  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 node start m02 -v=7 --alsologtostderr: (1m15.927200454s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr: (1.017834772s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (77.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.065313868s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (240.55s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-002327 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-002327 -v=7 --alsologtostderr
E0920 19:45:36.509752  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:45:48.815301  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-002327 -v=7 --alsologtostderr: (33.881221799s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-002327 --wait=true -v=7 --alsologtostderr
E0920 19:47:10.737268  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.874957  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-002327 --wait=true -v=7 --alsologtostderr: (3m26.499984817s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-002327
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (240.55s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.25s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 node delete m03 -v=7 --alsologtostderr: (10.287381061s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.25s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (32.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 stop -v=7 --alsologtostderr
E0920 19:49:54.583749  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:50:08.802569  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 stop -v=7 --alsologtostderr: (32.840535323s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr: exit status 7 (123.618826ms)

-- stdout --
	ha-002327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-002327-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-002327-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0920 19:50:18.351365  817368 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:50:18.351496  817368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:50:18.351504  817368 out.go:358] Setting ErrFile to fd 2...
	I0920 19:50:18.351509  817368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:50:18.351768  817368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 19:50:18.351948  817368 out.go:352] Setting JSON to false
	I0920 19:50:18.351984  817368 mustload.go:65] Loading cluster: ha-002327
	I0920 19:50:18.352112  817368 notify.go:220] Checking for updates...
	I0920 19:50:18.352448  817368 config.go:182] Loaded profile config "ha-002327": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 19:50:18.352464  817368 status.go:174] checking status of ha-002327 ...
	I0920 19:50:18.353292  817368 cli_runner.go:164] Run: docker container inspect ha-002327 --format={{.State.Status}}
	I0920 19:50:18.375366  817368 status.go:364] ha-002327 host status = "Stopped" (err=<nil>)
	I0920 19:50:18.375391  817368 status.go:377] host is not running, skipping remaining checks
	I0920 19:50:18.375399  817368 status.go:176] ha-002327 status: &{Name:ha-002327 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:50:18.375443  817368 status.go:174] checking status of ha-002327-m02 ...
	I0920 19:50:18.375771  817368 cli_runner.go:164] Run: docker container inspect ha-002327-m02 --format={{.State.Status}}
	I0920 19:50:18.409424  817368 status.go:364] ha-002327-m02 host status = "Stopped" (err=<nil>)
	I0920 19:50:18.409448  817368 status.go:377] host is not running, skipping remaining checks
	I0920 19:50:18.409456  817368 status.go:176] ha-002327-m02 status: &{Name:ha-002327-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:50:18.409475  817368 status.go:174] checking status of ha-002327-m04 ...
	I0920 19:50:18.409790  817368 cli_runner.go:164] Run: docker container inspect ha-002327-m04 --format={{.State.Status}}
	I0920 19:50:18.428939  817368 status.go:364] ha-002327-m04 host status = "Stopped" (err=<nil>)
	I0920 19:50:18.428964  817368 status.go:377] host is not running, skipping remaining checks
	I0920 19:50:18.428972  817368 status.go:176] ha-002327-m04 status: &{Name:ha-002327-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.96s)

TestMultiControlPlane/serial/RestartCluster (148.39s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-002327 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-002327 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m27.459249981s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (148.39s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (49.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-002327 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-002327 --control-plane -v=7 --alsologtostderr: (48.338710165s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-002327 status -v=7 --alsologtostderr: (1.002361962s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (49.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.097243919s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestImageBuild/serial/Setup (30.03s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-795018 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-795018 --driver=docker  --container-runtime=docker: (30.034461049s)
--- PASS: TestImageBuild/serial/Setup (30.03s)

TestImageBuild/serial/NormalBuild (1.95s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-795018
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-795018: (1.94757398s)
--- PASS: TestImageBuild/serial/NormalBuild (1.95s)

TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-795018
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-795018: (1.050091709s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.05s)

TestImageBuild/serial/BuildWithDockerIgnore (0.85s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-795018
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.85s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-795018
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

TestJSONOutput/start/Command (40.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-818598 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0920 19:54:26.876205  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-818598 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.341276984s)
--- PASS: TestJSONOutput/start/Command (40.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-818598 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-818598 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-818598 --output=json --user=testUser
E0920 19:55:08.802318  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-818598 --output=json --user=testUser: (10.846450744s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-628771 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-628771 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.365377ms)

-- stdout --
	{"specversion":"1.0","id":"9ed723e2-6051-4d17-92dd-b204d85f9303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-628771] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed539371-b635-41d5-9ece-8b6d131956cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"ff5a8f4d-8c6d-4315-a772-2cb1ca50d04e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82f78302-38e3-4a2e-9ea7-2023be0f9672","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig"}}
	{"specversion":"1.0","id":"1771b144-1afb-4b43-9960-2db8029b2be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube"}}
	{"specversion":"1.0","id":"66c278ed-4d6b-47ed-a4dd-fa8a6893c7ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"02163d0a-b60d-4788-9929-3df48a0d6e9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a357ed22-df74-4505-ac44-1a50f0695680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-628771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-628771
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (35.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-539564 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-539564 --network=: (33.213803177s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-539564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-539564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-539564: (2.060030176s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.29s)

TestKicCustomNetwork/use_default_bridge_network (31.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-989943 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-989943 --network=bridge: (29.41108147s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-989943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-989943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-989943: (2.010271835s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.44s)

TestKicExistingNetwork (29.59s)

=== RUN   TestKicExistingNetwork
I0920 19:56:25.272104  722379 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 19:56:25.287197  722379 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 19:56:25.288158  722379 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 19:56:25.288810  722379 cli_runner.go:164] Run: docker network inspect existing-network
W0920 19:56:25.304679  722379 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 19:56:25.304709  722379 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 19:56:25.304726  722379 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 19:56:25.304851  722379 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 19:56:25.322019  722379 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f5f47f05cede IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9b:59:b1:f0} reservation:<nil>}
I0920 19:56:25.322917  722379 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cab110}
I0920 19:56:25.322947  722379 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 19:56:25.323002  722379 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 19:56:25.393947  722379 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-969414 --network=existing-network
E0920 19:56:31.872251  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-969414 --network=existing-network: (27.807591942s)
helpers_test.go:175: Cleaning up "existing-network-969414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-969414
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-969414: (1.628770089s)
I0920 19:56:54.849272  722379 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (29.59s)

TestKicCustomSubnet (33.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-480794 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-480794 --subnet=192.168.60.0/24: (31.304629607s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-480794 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-480794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-480794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-480794: (2.136473716s)
--- PASS: TestKicCustomSubnet (33.46s)

TestKicStaticIP (34.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-452624 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-452624 --static-ip=192.168.200.200: (31.695872189s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-452624 ip
helpers_test.go:175: Cleaning up "static-ip-452624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-452624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-452624: (2.081280832s)
--- PASS: TestKicStaticIP (34.11s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-942412 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-942412 --driver=docker  --container-runtime=docker: (31.094419637s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-945005 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-945005 --driver=docker  --container-runtime=docker: (32.152609935s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-942412
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-945005
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-945005
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-945005: (2.054395588s)
helpers_test.go:175: Cleaning up "first-942412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-942412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-942412: (2.127998044s)
--- PASS: TestMinikubeProfile (68.76s)

TestMountStart/serial/StartWithMountFirst (7.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-137856 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-137856 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.685959918s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-137856 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-139768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-139768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.693164648s)
E0920 19:59:26.875067  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (7.69s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-139768 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-137856 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-137856 --alsologtostderr -v=5: (1.475326876s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-139768 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-139768
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-139768: (1.208562792s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (9.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-139768
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-139768: (8.06554973s)
--- PASS: TestMountStart/serial/RestartStopped (9.07s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-139768 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (83.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-678976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 20:00:08.803340  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:00:49.945248  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-678976 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.350570531s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.03s)

TestMultiNode/serial/DeployApp2Nodes (46.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-678976 -- rollout status deployment/busybox: (3.801951735s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:08.286046  722379 retry.go:31] will retry after 534.598132ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:08.969530  722379 retry.go:31] will retry after 1.610550682s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:10.737537  722379 retry.go:31] will retry after 1.942260987s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:12.834589  722379 retry.go:31] will retry after 5.032023087s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:18.036248  722379 retry.go:31] will retry after 4.726661828s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:22.927987  722379 retry.go:31] will retry after 9.444969024s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 20:01:32.519964  722379 retry.go:31] will retry after 15.984402907s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-p2hl7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-vvdbh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-p2hl7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-vvdbh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-p2hl7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-vvdbh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (46.11s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-p2hl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-p2hl7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-vvdbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-678976 -- exec busybox-7dff88458-vvdbh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (18.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-678976 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-678976 -v 3 --alsologtostderr: (17.744002121s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.54s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-678976 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (10.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp testdata/cp-test.txt multinode-678976:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile597844834/001/cp-test_multinode-678976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976:/home/docker/cp-test.txt multinode-678976-m02:/home/docker/cp-test_multinode-678976_multinode-678976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test_multinode-678976_multinode-678976-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976:/home/docker/cp-test.txt multinode-678976-m03:/home/docker/cp-test_multinode-678976_multinode-678976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test_multinode-678976_multinode-678976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp testdata/cp-test.txt multinode-678976-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile597844834/001/cp-test_multinode-678976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m02:/home/docker/cp-test.txt multinode-678976:/home/docker/cp-test_multinode-678976-m02_multinode-678976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test_multinode-678976-m02_multinode-678976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m02:/home/docker/cp-test.txt multinode-678976-m03:/home/docker/cp-test_multinode-678976-m02_multinode-678976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test_multinode-678976-m02_multinode-678976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp testdata/cp-test.txt multinode-678976-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile597844834/001/cp-test_multinode-678976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m03:/home/docker/cp-test.txt multinode-678976:/home/docker/cp-test_multinode-678976-m03_multinode-678976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976 "sudo cat /home/docker/cp-test_multinode-678976-m03_multinode-678976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 cp multinode-678976-m03:/home/docker/cp-test.txt multinode-678976-m02:/home/docker/cp-test_multinode-678976-m03_multinode-678976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 ssh -n multinode-678976-m02 "sudo cat /home/docker/cp-test_multinode-678976-m03_multinode-678976-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.33s)
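Every CopyFile assertion above follows the same round-trip shape: copy cp-test.txt to a destination, read it back with `minikube ssh -n <node> "sudo cat ..."`, and compare. A local stand-in for that pattern using plain `cp`/`cat` in a temp directory (no minikube involved; paths and contents are illustrative):

```shell
workdir="$(mktemp -d)"
printf 'cp-test contents\n' > "$workdir/cp-test.txt"

# Stand-in for: minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/cp-test_copy.txt"

# Stand-in for: minikube ssh -n <node> "sudo cat /home/docker/cp-test.txt"
readback="$(cat "$workdir/cp-test_copy.txt")"

[ "$readback" = 'cp-test contents' ] && echo 'round-trip OK'
rm -rf "$workdir"
```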

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-678976 node stop m03: (1.219744226s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-678976 status: exit status 7 (537.161673ms)
-- stdout --
	multinode-678976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-678976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-678976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr: exit status 7 (537.250474ms)
-- stdout --
	multinode-678976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-678976-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-678976-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 20:02:22.738025  892500 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:02:22.738179  892500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:02:22.738191  892500 out.go:358] Setting ErrFile to fd 2...
	I0920 20:02:22.738197  892500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:02:22.738444  892500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 20:02:22.738625  892500 out.go:352] Setting JSON to false
	I0920 20:02:22.738663  892500 mustload.go:65] Loading cluster: multinode-678976
	I0920 20:02:22.738921  892500 notify.go:220] Checking for updates...
	I0920 20:02:22.739788  892500 config.go:182] Loaded profile config "multinode-678976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:02:22.739816  892500 status.go:174] checking status of multinode-678976 ...
	I0920 20:02:22.740509  892500 cli_runner.go:164] Run: docker container inspect multinode-678976 --format={{.State.Status}}
	I0920 20:02:22.758556  892500 status.go:364] multinode-678976 host status = "Running" (err=<nil>)
	I0920 20:02:22.758581  892500 host.go:66] Checking if "multinode-678976" exists ...
	I0920 20:02:22.758898  892500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-678976
	I0920 20:02:22.780488  892500 host.go:66] Checking if "multinode-678976" exists ...
	I0920 20:02:22.780811  892500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:02:22.780873  892500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-678976
	I0920 20:02:22.798171  892500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/multinode-678976/id_rsa Username:docker}
	I0920 20:02:22.901596  892500 ssh_runner.go:195] Run: systemctl --version
	I0920 20:02:22.906063  892500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:02:22.918183  892500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:02:22.972862  892500 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 20:02:22.962820923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:02:22.973527  892500 kubeconfig.go:125] found "multinode-678976" server: "https://192.168.67.2:8443"
	I0920 20:02:22.973622  892500 api_server.go:166] Checking apiserver status ...
	I0920 20:02:22.973695  892500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:02:22.987348  892500 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2246/cgroup
	I0920 20:02:22.997511  892500 api_server.go:182] apiserver freezer: "13:freezer:/docker/d20b3642b3f6a8a44cd09b8e6adb4b0ac5d94d9816e55f4caf118c2c5550e86b/kubepods/burstable/pod6588fbd10f7dfa9caf42b749100c1e81/2da61d28a585e9382fd46da3e377a21812cd90ad85107c575cad29a609f82a45"
	I0920 20:02:22.997596  892500 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d20b3642b3f6a8a44cd09b8e6adb4b0ac5d94d9816e55f4caf118c2c5550e86b/kubepods/burstable/pod6588fbd10f7dfa9caf42b749100c1e81/2da61d28a585e9382fd46da3e377a21812cd90ad85107c575cad29a609f82a45/freezer.state
	I0920 20:02:23.008629  892500 api_server.go:204] freezer state: "THAWED"
	I0920 20:02:23.008670  892500 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 20:02:23.016515  892500 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 20:02:23.016557  892500 status.go:456] multinode-678976 apiserver status = Running (err=<nil>)
	I0920 20:02:23.016568  892500 status.go:176] multinode-678976 status: &{Name:multinode-678976 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:02:23.016585  892500 status.go:174] checking status of multinode-678976-m02 ...
	I0920 20:02:23.016909  892500 cli_runner.go:164] Run: docker container inspect multinode-678976-m02 --format={{.State.Status}}
	I0920 20:02:23.037774  892500 status.go:364] multinode-678976-m02 host status = "Running" (err=<nil>)
	I0920 20:02:23.037811  892500 host.go:66] Checking if "multinode-678976-m02" exists ...
	I0920 20:02:23.038173  892500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-678976-m02
	I0920 20:02:23.055993  892500 host.go:66] Checking if "multinode-678976-m02" exists ...
	I0920 20:02:23.056358  892500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:02:23.056410  892500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-678976-m02
	I0920 20:02:23.076119  892500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/19678-715609/.minikube/machines/multinode-678976-m02/id_rsa Username:docker}
	I0920 20:02:23.177654  892500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:02:23.190529  892500 status.go:176] multinode-678976-m02 status: &{Name:multinode-678976-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:02:23.190566  892500 status.go:174] checking status of multinode-678976-m03 ...
	I0920 20:02:23.190889  892500 cli_runner.go:164] Run: docker container inspect multinode-678976-m03 --format={{.State.Status}}
	I0920 20:02:23.208767  892500 status.go:364] multinode-678976-m03 host status = "Stopped" (err=<nil>)
	I0920 20:02:23.208792  892500 status.go:377] host is not running, skipping remaining checks
	I0920 20:02:23.208800  892500 status.go:176] multinode-678976-m03 status: &{Name:multinode-678976-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
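The stderr trace above shows how `minikube status` probes each running node: among other checks, it runs `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH to read the use% of the filesystem backing /var. The same pipeline against canned `df` output (sample values are illustrative, not from this run):

```shell
# Canned `df -h /var` output (values illustrative).
df_output='Filesystem      Size  Used Avail Use% Mounted on
overlay         100G   20G   80G  20% /var'

# NR==2 skips the header row; $5 is the Use% column.
printf '%s\n' "$df_output" | awk 'NR==2{print $5}'   # → 20%
```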

TestMultiNode/serial/StartAfterStop (11.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-678976 node start m03 -v=7 --alsologtostderr: (10.590154267s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.39s)

TestMultiNode/serial/RestartKeepsNodes (104.52s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-678976
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-678976
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-678976: (22.641625724s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-678976 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-678976 --wait=true -v=8 --alsologtostderr: (1m21.744083887s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-678976
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.52s)

TestMultiNode/serial/DeleteNode (5.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-678976 node delete m03: (4.985613894s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)

TestMultiNode/serial/StopMultiNode (21.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 stop
E0920 20:04:26.875500  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-678976 stop: (21.512874696s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-678976 status: exit status 7 (87.766017ms)
-- stdout --
	multinode-678976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-678976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr: exit status 7 (93.544869ms)
-- stdout --
	multinode-678976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-678976-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 20:04:46.462468  906064 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:04:46.462632  906064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:04:46.462642  906064 out.go:358] Setting ErrFile to fd 2...
	I0920 20:04:46.462649  906064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:04:46.462890  906064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-715609/.minikube/bin
	I0920 20:04:46.463078  906064 out.go:352] Setting JSON to false
	I0920 20:04:46.463116  906064 mustload.go:65] Loading cluster: multinode-678976
	I0920 20:04:46.463224  906064 notify.go:220] Checking for updates...
	I0920 20:04:46.463534  906064 config.go:182] Loaded profile config "multinode-678976": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:04:46.463547  906064 status.go:174] checking status of multinode-678976 ...
	I0920 20:04:46.464432  906064 cli_runner.go:164] Run: docker container inspect multinode-678976 --format={{.State.Status}}
	I0920 20:04:46.480802  906064 status.go:364] multinode-678976 host status = "Stopped" (err=<nil>)
	I0920 20:04:46.480828  906064 status.go:377] host is not running, skipping remaining checks
	I0920 20:04:46.480836  906064 status.go:176] multinode-678976 status: &{Name:multinode-678976 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:04:46.480864  906064 status.go:174] checking status of multinode-678976-m02 ...
	I0920 20:04:46.481169  906064 cli_runner.go:164] Run: docker container inspect multinode-678976-m02 --format={{.State.Status}}
	I0920 20:04:46.506887  906064 status.go:364] multinode-678976-m02 host status = "Stopped" (err=<nil>)
	I0920 20:04:46.506913  906064 status.go:377] host is not running, skipping remaining checks
	I0920 20:04:46.506920  906064 status.go:176] multinode-678976-m02 status: &{Name:multinode-678976-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.69s)

TestMultiNode/serial/RestartMultiNode (59.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-678976 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 20:05:08.802790  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-678976 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.665933922s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-678976 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.41s)

TestMultiNode/serial/ValidateNameConflict (37.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-678976
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-678976-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-678976-m02 --driver=docker  --container-runtime=docker: exit status 14 (108.23042ms)
-- stdout --
	* [multinode-678976-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-678976-m02' is duplicated with machine name 'multinode-678976-m02' in profile 'multinode-678976'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-678976-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-678976-m03 --driver=docker  --container-runtime=docker: (34.524426739s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-678976
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-678976: exit status 80 (316.088102ms)
-- stdout --
	* Adding node m03 to cluster multinode-678976 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-678976-m03 already exists in multinode-678976-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-678976-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-678976-m03: (2.057449836s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.06s)
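ValidateNameConflict exercises minikube's MK_USAGE guard: a new profile name must not collide with a machine name already owned by an existing profile. A simplified shell stand-in for that uniqueness check (the names are this run's; the check itself is a sketch, not minikube's actual validation code):

```shell
# Machine names already owned by profile multinode-678976 in this run.
existing_machines='multinode-678976
multinode-678976-m02'

# The profile name the test tries to create, which collides.
new_profile='multinode-678976-m02'

# grep -x matches whole lines only, so partial name overlaps don't false-positive.
if printf '%s\n' "$existing_machines" | grep -qx "$new_profile"; then
  echo 'X Exiting due to MK_USAGE: Profile name should be unique'
fi
```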

TestPreload (101.73s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-161816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-161816 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m4.487530756s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-161816 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-161816 image pull gcr.io/k8s-minikube/busybox: (2.174278113s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-161816
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-161816: (11.009014488s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-161816 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-161816 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (21.527662595s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-161816 image list
helpers_test.go:175: Cleaning up "test-preload-161816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-161816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-161816: (2.243290684s)
--- PASS: TestPreload (101.73s)

TestScheduledStopUnix (106.84s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-154577 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-154577 --memory=2048 --driver=docker  --container-runtime=docker: (33.627603081s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-154577 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-154577 -n scheduled-stop-154577
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-154577 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 20:08:42.857682  722379 retry.go:31] will retry after 119.007µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.858213  722379 retry.go:31] will retry after 172.246µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.859357  722379 retry.go:31] will retry after 307.029µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.860498  722379 retry.go:31] will retry after 194.381µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.861652  722379 retry.go:31] will retry after 660.445µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.862755  722379 retry.go:31] will retry after 813.532µs: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.863829  722379 retry.go:31] will retry after 1.453058ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.865990  722379 retry.go:31] will retry after 1.755417ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.870328  722379 retry.go:31] will retry after 3.499016ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.874252  722379 retry.go:31] will retry after 2.305471ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.881732  722379 retry.go:31] will retry after 6.340987ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.888204  722379 retry.go:31] will retry after 11.871807ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.900383  722379 retry.go:31] will retry after 14.753909ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.915608  722379 retry.go:31] will retry after 24.875454ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.940794  722379 retry.go:31] will retry after 20.068875ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
I0920 20:08:42.961965  722379 retry.go:31] will retry after 64.476827ms: open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/scheduled-stop-154577/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-154577 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-154577 -n scheduled-stop-154577
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-154577
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-154577 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 20:09:26.876238  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-154577
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-154577: exit status 7 (67.746903ms)

-- stdout --
	scheduled-stop-154577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-154577 -n scheduled-stop-154577
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-154577 -n scheduled-stop-154577: exit status 7 (64.539659ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-154577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-154577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-154577: (1.643181369s)
--- PASS: TestScheduledStopUnix (106.84s)

TestSkaffold (118s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4122982034 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-055219 --memory=2600 --driver=docker  --container-runtime=docker
E0920 20:10:08.802548  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-055219 --memory=2600 --driver=docker  --container-runtime=docker: (31.125699111s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4122982034 run --minikube-profile skaffold-055219 --kube-context skaffold-055219 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4122982034 run --minikube-profile skaffold-055219 --kube-context skaffold-055219 --status-check=true --port-forward=false --interactive=false: (1m11.599723022s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-78b55bbd87-4qbgp" [f53b8ecc-f265-48eb-b0bd-a27a156f8093] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004571768s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7d4dff9fb5-2nqz4" [dd168d89-f227-4ad6-b90d-a3e8748fcc92] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003394103s
helpers_test.go:175: Cleaning up "skaffold-055219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-055219
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-055219: (2.988575768s)
--- PASS: TestSkaffold (118.00s)

TestInsufficientStorage (11.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-523230 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-523230 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.917134782s)

-- stdout --
	{"specversion":"1.0","id":"5ece993b-8e78-477c-8f1c-58a263fc9cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-523230] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"542b254b-b542-452b-ae61-e3d0fa1fb775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"c98ca7bf-cf88-4079-a823-d71db26fdf36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"17c157bc-d1cf-46fe-9be3-d6330c4ed6eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig"}}
	{"specversion":"1.0","id":"3d187480-7e7a-4b19-b612-83f65d6117fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube"}}
	{"specversion":"1.0","id":"49058366-1b85-42d0-bf46-ede570ffa5ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"731490c9-7112-499b-927b-91ce51118cbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f3d469b-1a76-4c62-a15b-f74926ed9bd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0a0ae9e7-9838-445d-a955-08f8b177cc0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"74b8de81-b92b-4f91-b492-45b56d652181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"94c4100a-a655-40f9-bd2f-0a1c6ab21627","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b78b0797-f919-4732-b30d-42ee20a13025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-523230\" primary control-plane node in \"insufficient-storage-523230\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"73fc4381-da5c-43e7-ab00-9a2f6964690f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7791f55e-7d8e-40ef-89c0-bb65acfc5dfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"57b9fd36-ba08-4663-b07f-7b5ee828912d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-523230 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-523230 --output=json --layout=cluster: exit status 7 (306.497473ms)

-- stdout --
	{"Name":"insufficient-storage-523230","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-523230","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 20:12:02.765076  940202 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-523230" does not appear in /home/jenkins/minikube-integration/19678-715609/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-523230 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-523230 --output=json --layout=cluster: exit status 7 (302.265377ms)

-- stdout --
	{"Name":"insufficient-storage-523230","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-523230","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 20:12:03.066466  940266 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-523230" does not appear in /home/jenkins/minikube-integration/19678-715609/kubeconfig
	E0920 20:12:03.079126  940266 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/insufficient-storage-523230/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-523230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-523230
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-523230: (1.662372623s)
--- PASS: TestInsufficientStorage (11.19s)

TestRunningBinaryUpgrade (108.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.620260127 start -p running-upgrade-518723 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.620260127 start -p running-upgrade-518723 --memory=2200 --vm-driver=docker  --container-runtime=docker: (43.643473732s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-518723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-518723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.605521608s)
helpers_test.go:175: Cleaning up "running-upgrade-518723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-518723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-518723: (2.855206535s)
--- PASS: TestRunningBinaryUpgrade (108.96s)

TestKubernetesUpgrade (379.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 20:16:39.543406  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.549746  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.561053  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.582414  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.623756  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.705042  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:39.866511  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:40.188167  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:40.830101  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:42.111489  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:44.673543  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:49.794892  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.604683809s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-546967
E0920 20:17:00.036996  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-546967: (10.861033562s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-546967 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-546967 status --format={{.Host}}: exit status 7 (82.366119ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 20:17:20.519089  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:17:29.947558  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:18:01.480431  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m38.811870348s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-546967 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (113.352105ms)

-- stdout --
	* [kubernetes-upgrade-546967] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-546967
	    minikube start -p kubernetes-upgrade-546967 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5469672 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-546967 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 20:22:07.244244  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-546967 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.036198936s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-546967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-546967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-546967: (2.893211845s)
--- PASS: TestKubernetesUpgrade (379.51s)

TestMissingContainerUpgrade (170.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1442820853 start -p missing-upgrade-345152 --memory=2200 --driver=docker  --container-runtime=docker
E0920 20:19:26.875648  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:20:08.802360  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1442820853 start -p missing-upgrade-345152 --memory=2200 --driver=docker  --container-runtime=docker: (1m26.585576384s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-345152
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-345152: (10.39684283s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-345152
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-345152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 20:21:39.542884  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-345152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m9.441612444s)
helpers_test.go:175: Cleaning up "missing-upgrade-345152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-345152
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-345152: (2.763010411s)
--- PASS: TestMissingContainerUpgrade (170.44s)

TestPause/serial/Start (49.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-404245 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-404245 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (49.72737847s)
--- PASS: TestPause/serial/Start (49.73s)

TestPause/serial/SecondStartNoReconfiguration (39.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-404245 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-404245 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.169363579s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.20s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (84.218884ms)

-- stdout --
	* [NoKubernetes-186176] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-715609/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-715609/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (38.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-186176 --driver=docker  --container-runtime=docker
E0920 20:13:11.874182  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-186176 --driver=docker  --container-runtime=docker: (37.969538677s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-186176 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.41s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-404245 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-404245 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-404245 --output=json --layout=cluster: exit status 2 (491.103132ms)

-- stdout --
	{"Name":"pause-404245","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-404245","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-404245 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (1s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-404245 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-404245 --alsologtostderr -v=5: (1.001854496s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

TestPause/serial/DeletePaused (2.39s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-404245 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-404245 --alsologtostderr -v=5: (2.392311226s)
--- PASS: TestPause/serial/DeletePaused (2.39s)

TestPause/serial/VerifyDeletedResources (0.6s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-404245
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-404245: exit status 1 (25.978323ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-404245: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

TestNoKubernetes/serial/StartWithStopK8s (19.97s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --driver=docker  --container-runtime=docker: (17.723292519s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-186176 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-186176 status -o json: exit status 2 (387.456011ms)

-- stdout --
	{"Name":"NoKubernetes-186176","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-186176
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-186176: (1.856443457s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.97s)

TestNoKubernetes/serial/Start (8.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-186176 --no-kubernetes --driver=docker  --container-runtime=docker: (8.872119442s)
--- PASS: TestNoKubernetes/serial/Start (8.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-186176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-186176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.754669ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

TestNoKubernetes/serial/ProfileList (1.19s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-186176
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-186176: (1.312854689s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (8.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-186176 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-186176 --driver=docker  --container-runtime=docker: (8.848094132s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-186176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-186176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (365.764401ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (0.88s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)

TestStoppedBinaryUpgrade/Upgrade (100.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3423427723 start -p stopped-upgrade-336004 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3423427723 start -p stopped-upgrade-336004 --memory=2200 --vm-driver=docker  --container-runtime=docker: (45.305250439s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3423427723 -p stopped-upgrade-336004 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3423427723 -p stopped-upgrade-336004 stop: (10.918754472s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-336004 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-336004 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.220914062s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.45s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-336004
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-336004: (2.335906047s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

TestNetworkPlugins/group/auto/Start (91.17s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m31.173426434s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.17s)

TestNetworkPlugins/group/kindnet/Start (80.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0920 20:24:26.875437  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:25:08.802965  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m20.105941122s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tnswt" [de679c23-b179-4f93-8ede-5751428b17b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004173226s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-032318 "pgrep -a kubelet"
I0920 20:25:30.368195  722379 config.go:182] Loaded profile config "kindnet-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m7snd" [cf963db1-9f45-4259-ab40-b9f23b28d046] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m7snd" [cf963db1-9f45-4259-ab40-b9f23b28d046] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003832884s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-032318 "pgrep -a kubelet"
I0920 20:25:35.275077  722379 config.go:182] Loaded profile config "auto-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h8c7s" [98b79fc2-8225-499c-9c0a-afe73ed50444] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h8c7s" [98b79fc2-8225-499c-9c0a-afe73ed50444] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004646417s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (84.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m24.722492247s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.72s)

TestNetworkPlugins/group/custom-flannel/Start (62.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0920 20:26:39.543511  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.484660909s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.48s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-032318 "pgrep -a kubelet"
I0920 20:27:15.318618  722379 config.go:182] Loaded profile config "custom-flannel-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-032318 replace --force -f testdata/netcat-deployment.yaml
I0920 20:27:15.663229  722379 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z4x94" [7536355e-2801-45b8-8a02-b5bb42b92bee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z4x94" [7536355e-2801-45b8-8a02-b5bb42b92bee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005034766s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-78jg2" [da942052-0847-4dc3-98fd-da0e5b7a8e22] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008339373s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-032318 "pgrep -a kubelet"
I0920 20:27:36.542151  722379 config.go:182] Loaded profile config "calico-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (12.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m79bx" [b699e016-a433-42f0-9910-17c0c122cc13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m79bx" [b699e016-a433-42f0-9910-17c0c122cc13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003688012s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.48s)

TestNetworkPlugins/group/calico/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/false/Start (58.5s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (58.495924627s)
--- PASS: TestNetworkPlugins/group/false/Start (58.50s)

TestNetworkPlugins/group/enable-default-cni/Start (43.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (43.456758967s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.46s)

TestNetworkPlugins/group/false/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-032318 "pgrep -a kubelet"
I0920 20:28:50.828604  722379 config.go:182] Loaded profile config "false-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

TestNetworkPlugins/group/false/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9fntl" [c4050a68-2143-4758-bcb6-53e08ebb3084] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9fntl" [c4050a68-2143-4758-bcb6-53e08ebb3084] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.005476208s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.36s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-032318 "pgrep -a kubelet"
I0920 20:29:01.046909  722379 config.go:182] Loaded profile config "enable-default-cni-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nnhf5" [a117eae1-cde6-499b-8160-319f95466873] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nnhf5" [a117eae1-cde6-499b-8160-319f95466873] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004164385s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

TestNetworkPlugins/group/false/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.31s)

TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

TestNetworkPlugins/group/false/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

TestNetworkPlugins/group/flannel/Start (66.04s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0920 20:29:26.875043  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m6.039358708s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.04s)

TestNetworkPlugins/group/bridge/Start (62.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0920 20:29:51.876451  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:08.802495  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.074777  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.081162  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.092730  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.114075  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.155458  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.237130  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.398575  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:24.720172  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:25.362538  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:26.643801  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:29.205855  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m2.052437159s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.05s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hqvbl" [d91f1cb2-1e14-48fc-95c1-283ffe39710c] Running
E0920 20:30:34.327588  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00379027s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-032318 "pgrep -a kubelet"
E0920 20:30:35.533931  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:35.540286  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:35.552195  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:35.576231  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:35.617592  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:35.699226  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
I0920 20:30:35.820560  722379 config.go:182] Loaded profile config "flannel-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-032318 replace --force -f testdata/netcat-deployment.yaml
E0920 20:30:35.861298  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7hj6q" [d0c9e57e-60b9-4ba2-8398-372e2f016d5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:30:36.183111  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:36.825139  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:38.107130  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7hj6q" [d0c9e57e-60b9-4ba2-8398-372e2f016d5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003895341s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-032318 "pgrep -a kubelet"
I0920 20:30:39.002638  722379 config.go:182] Loaded profile config "bridge-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mmks7" [b86413d3-bbd1-4009-accd-0760e434da0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:30:40.668822  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mmks7" [b86413d3-bbd1-4009-accd-0760e434da0c] Running
E0920 20:30:44.569801  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:30:45.790554  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00453405s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (73.73s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-032318 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m13.733343054s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (73.73s)

TestStartStop/group/old-k8s-version/serial/FirstStart (157.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140097 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 20:31:16.514190  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:39.542952  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:46.013936  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:57.476013  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.637492  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.643909  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.655384  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.676806  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.718193  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.799582  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:15.961089  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:16.282748  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:16.925035  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:18.206589  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:20.768244  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:25.890602  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-140097 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m37.589826462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.59s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-032318 "pgrep -a kubelet"
I0920 20:32:27.066403  722379 config.go:182] Loaded profile config "kubenet-032318": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-032318 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wh8fb" [d77753ab-1ec1-412d-aa1b-8624dc4ed6a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:32:30.097857  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.104256  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.116030  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.137413  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.178780  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.260225  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.421666  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:30.743288  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:31.385368  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wh8fb" [d77753ab-1ec1-412d-aa1b-8624dc4ed6a2] Running
E0920 20:32:32.667617  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:35.229491  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:36.133343  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003259113s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.37s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-032318 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-032318 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
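The three kubenet probes above (DNS, Localhost, HairPin) each reduce to a `kubectl exec` into the netcat deployment. A minimal standalone sketch follows; the context name and deployment come from the log, while `probe()` and the `DRY_RUN` flag are illustrative additions, not part of the minikube test harness:

```shell
# Sketch of the kubenet connectivity probes. CTX and deployment/netcat are
# taken from the log above; probe() and DRY_RUN are illustrative only.
CTX="${CTX:-kubenet-032318}"
DRY_RUN="${DRY_RUN:-1}"   # 1 = print the command only; 0 = exec against a live cluster

probe() {
  # $1: probe name, $2: command to run inside the netcat pod
  cmd="kubectl --context $CTX exec deployment/netcat -- /bin/sh -c \"$2\""
  if [ "$DRY_RUN" = "1" ]; then
    echo "[$1] $cmd"
  else
    eval "$cmd"
  fi
}

probe DNS       'nslookup kubernetes.default'
probe Localhost 'nc -w 5 -i 5 -z localhost 8080'
probe HairPin   'nc -w 5 -i 5 -z netcat 8080'
```

With `DRY_RUN=1` (the default) the script only prints the three commands, so it can be inspected without a cluster.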
E0920 20:43:51.161828  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:43:52.819833  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:01.351164  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.535818  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.542321  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.553726  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.575228  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.616703  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.698269  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:19.859868  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:20.181989  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:20.525772  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:20.823362  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:22.105287  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:24.666779  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:26.875474  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:29.788489  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:44:40.030747  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:00.513037  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:08.802407  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:24.074605  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:29.528679  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (80.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-914403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:33:02.606182  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:07.936360  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:11.075327  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:19.398192  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:37.576745  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.161351  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.167750  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.179092  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.200491  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.241929  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.323230  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.485162  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:51.806610  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:52.037520  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:33:52.448764  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-914403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m20.241288998s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-140097 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5142a621-81c1-4e4e-939a-5b641a7f4dc0] Pending
E0920 20:33:53.730101  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5142a621-81c1-4e4e-939a-5b641a7f4dc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 20:33:56.292167  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5142a621-81c1-4e4e-939a-5b641a7f4dc0] Running
E0920 20:34:01.352039  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.358527  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.369918  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.391465  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.413948  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.433486  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.514903  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.676467  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:01.997801  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:02.639823  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00319203s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-140097 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-140097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 20:34:03.921452  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-140097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033130662s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-140097 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-140097 --alsologtostderr -v=3
E0920 20:34:06.483551  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:09.949569  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:11.605054  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:11.655436  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-140097 --alsologtostderr -v=3: (11.073529116s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140097 -n old-k8s-version-140097
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140097 -n old-k8s-version-140097: exit status 7 (78.307482ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-140097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
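The EnableAddonAfterStop sequence above tolerates `minikube status` exiting 7 on a stopped host (the log marks it "may be ok") before enabling the dashboard addon. A hedged sketch of that flow; `enable_addon_after_stop` is an illustrative helper, and the minikube binary path and profile name are passed in rather than assumed:

```shell
# Sketch of the EnableAddonAfterStop flow above. Exit status 7 from
# `minikube status` corresponds to a Stopped host, which the test log treats
# as acceptable. enable_addon_after_stop is an illustrative helper.
enable_addon_after_stop() {
  mk="$1"; profile="$2"
  host="$("$mk" status --format='{{.Host}}' -p "$profile" -n "$profile")" && rc=0 || rc=$?
  # Only 0 (running) and 7 (stopped, "may be ok") are acceptable here.
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
    echo "unexpected status error: exit $rc" >&2
    return 1
  fi
  echo "host: ${host:-unknown} (exit $rc)"
  "$mk" addons enable dashboard -p "$profile" \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
}
```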

TestStartStop/group/old-k8s-version/serial/SecondStart (124.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140097 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-140097 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m4.131552428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140097 -n old-k8s-version-140097
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (124.55s)

TestStartStop/group/no-preload/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-914403 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbdc9607-f2b7-4919-87f2-9663d2776b56] Pending
helpers_test.go:344: "busybox" [bbdc9607-f2b7-4919-87f2-9663d2776b56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 20:34:21.846992  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [bbdc9607-f2b7-4919-87f2-9663d2776b56] Running
E0920 20:34:26.875574  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/functional-087953/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005070311s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-914403 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)
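The DeployApp steps above create `testdata/busybox.yaml`, wait for the `integration-test=busybox` pod to become healthy, then check the file-descriptor limit inside the pod. The real test polls pod state via its Go helpers; `kubectl wait` in the sketch below is a rough CLI stand-in, and `deploy_and_check` plus the injectable kubectl path are illustrative:

```shell
# Sketch of the DeployApp sequence. Manifest path, label selector, 8m timeout,
# and pod name come from the log; the kubectl binary ($1) is injectable so the
# sketch can be exercised without a cluster.
deploy_and_check() {
  kc="$1"; ctx="$2"
  "$kc" --context "$ctx" create -f testdata/busybox.yaml
  "$kc" --context "$ctx" wait --for=condition=Ready pod \
    -l integration-test=busybox --timeout=8m0s
  "$kc" --context "$ctx" exec busybox -- /bin/sh -c "ulimit -n"
}
```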

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-914403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-914403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.470488136s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-914403 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.63s)

TestStartStop/group/no-preload/serial/Stop (11.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-914403 --alsologtostderr -v=3
E0920 20:34:32.137952  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-914403 --alsologtostderr -v=3: (11.127512111s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-914403 -n no-preload-914403
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-914403 -n no-preload-914403: exit status 7 (82.053332ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-914403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (268.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-914403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:34:42.328474  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:34:59.498429  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:08.803343  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:13.099264  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:13.959211  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:23.290335  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:24.075356  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.528727  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.535272  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.546825  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.568366  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.609869  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.691361  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:29.852847  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:30.174432  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:30.816563  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.098118  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:34.660356  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:35.533932  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.354106  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.360509  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.371922  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.393308  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.434825  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.516125  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.678064  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:39.782549  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:40.000136  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:40.646402  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:41.928206  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:44.490598  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:49.612861  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:50.024761  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:51.777684  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:59.855044  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:03.239589  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:10.506153  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-914403 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.887191425s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-914403 -n no-preload-914403
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.27s)
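The `cert_rotation.go:171` "no such file or directory" errors interleaved throughout this output come from client-certificate watchers for profiles that were deleted earlier in the run. A quick way to tally them per profile from a saved copy of this report (the log filename `test.log` is an assumption; point `LOG` at your saved file):

```shell
# Tally repeated cert_rotation "client.crt: no such file or directory"
# errors per minikube profile from a saved copy of this report.
# The log filename is an assumption, not part of this run.
LOG="${LOG:-test.log}"
if [ -f "$LOG" ]; then
  grep -o 'profiles/[^/]*/client.crt' "$LOG" \
    | sort | uniq -c | sort -rn
fi
```

Profiles with high counts (e.g. `bridge-032318`, `kubenet-032318` here) indicate watchers left behind by earlier network-plugin tests; the errors are noise and do not affect the pass/fail results above.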

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2rmk7" [6314cb8a-10a5-48aa-9c94-ce7cfe0e6a67] Running
E0920 20:36:20.336504  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004722992s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2rmk7" [6314cb8a-10a5-48aa-9c94-ce7cfe0e6a67] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003704241s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-140097 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-140097 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-140097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140097 -n old-k8s-version-140097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140097 -n old-k8s-version-140097: exit status 2 (363.095418ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140097 -n old-k8s-version-140097
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140097 -n old-k8s-version-140097: exit status 2 (344.904981ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-140097 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140097 -n old-k8s-version-140097
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140097 -n old-k8s-version-140097
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.88s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-578066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:36:39.543292  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:45.212416  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:51.467655  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:01.297911  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:15.637536  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-578066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (47.875818524s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-578066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [146bc00d-2b2f-4e61-9fb9-9827d5c0d1b5] Pending
helpers_test.go:344: "busybox" [146bc00d-2b2f-4e61-9fb9-9827d5c0d1b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 20:37:27.405355  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.411845  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.423298  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.444791  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.486328  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.567737  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:27.733515  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:28.059036  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:28.701214  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [146bc00d-2b2f-4e61-9fb9-9827d5c0d1b5] Running
E0920 20:37:29.982735  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:30.098102  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:32.544655  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003936952s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-578066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-578066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-578066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017815358s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-578066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.89s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-578066 --alsologtostderr -v=3
E0920 20:37:37.666756  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:43.342137  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-578066 --alsologtostderr -v=3: (10.886705634s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-578066 -n embed-certs-578066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-578066 -n embed-certs-578066: exit status 7 (67.903496ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-578066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (268.02s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-578066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:37:47.908256  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:37:57.801502  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/calico-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:08.389511  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:13.389940  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:23.221182  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:49.351048  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:51.161475  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.819672  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.826136  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.837569  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.862472  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.903875  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:52.985374  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:53.146778  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:53.468066  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:54.110296  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:55.392569  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:38:57.954440  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:39:01.351082  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:39:03.076590  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-578066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.641027129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-578066 -n embed-certs-578066
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qxtmb" [443af2f3-aaac-45b0-892b-799d95927d28] Running
E0920 20:39:13.318335  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004255966s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qxtmb" [443af2f3-aaac-45b0-892b-799d95927d28] Running
E0920 20:39:18.862361  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/false-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003933407s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-914403 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-914403 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.9s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-914403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-914403 -n no-preload-914403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-914403 -n no-preload-914403: exit status 2 (329.616758ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-914403 -n no-preload-914403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-914403 -n no-preload-914403: exit status 2 (339.177402ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-914403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-914403 -n no-preload-914403
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-914403 -n no-preload-914403
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-917044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:39:29.054642  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/enable-default-cni-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:39:33.800256  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:08.802302  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/addons-711398/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:11.274088  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:14.761866  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:24.075213  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kindnet-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:29.528047  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:35.534067  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:40:39.353663  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-917044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m12.568665391s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.57s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-917044 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f5b4115-edd8-4014-9c17-6e9961437839] Pending
helpers_test.go:344: "busybox" [6f5b4115-edd8-4014-9c17-6e9961437839] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f5b4115-edd8-4014-9c17-6e9961437839] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00293196s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-917044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-917044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-917044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025682996s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-917044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-917044 --alsologtostderr -v=3
E0920 20:40:57.231235  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-917044 --alsologtostderr -v=3: (11.076266363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044: exit status 7 (77.993917ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-917044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-917044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:41:07.063894  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:41:36.683578  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/old-k8s-version-140097/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:41:39.542954  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/skaffold-055219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-917044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m28.183406924s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.53s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gngj7" [802eeaac-54cc-4bf3-ab6b-45e6b10e3b6f] Running
E0920 20:42:15.637355  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/custom-flannel-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003883355s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gngj7" [802eeaac-54cc-4bf3-ab6b-45e6b10e3b6f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005873433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-578066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-578066 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-578066 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-578066 -n embed-certs-578066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-578066 -n embed-certs-578066: exit status 2 (320.502469ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-578066 -n embed-certs-578066
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-578066 -n embed-certs-578066: exit status 2 (327.442027ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-578066 --alsologtostderr -v=1
E0920 20:42:27.405333  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-578066 -n embed-certs-578066
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-578066 -n embed-certs-578066
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

TestStartStop/group/newest-cni/serial/FirstStart (36.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-757362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 20:42:55.116200  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/kubenet-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-757362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (36.878807204s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.88s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-757362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-757362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.266839521s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (9.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-757362 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-757362 --alsologtostderr -v=3: (9.639564611s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757362 -n newest-cni-757362
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757362 -n newest-cni-757362: exit status 7 (73.433057ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-757362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.63s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-757362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-757362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.139924963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-757362 -n newest-cni-757362
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.63s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-757362 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-757362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757362 -n newest-cni-757362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757362 -n newest-cni-757362: exit status 2 (342.935982ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757362 -n newest-cni-757362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757362 -n newest-cni-757362: exit status 2 (322.46523ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-757362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-757362 -n newest-cni-757362
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-757362 -n newest-cni-757362
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-58fr6" [ea8c285e-0268-424c-b3b1-baf9e25c5f7f] Running
E0920 20:45:35.534040  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/auto-032318/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004396258s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-58fr6" [ea8c285e-0268-424c-b3b1-baf9e25c5f7f] Running
E0920 20:45:39.353382  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/bridge-032318/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:41.474844  722379 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/no-preload-914403/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003540405s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-917044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-917044 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-917044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044: exit status 2 (310.998446ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044: exit status 2 (333.925968ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-917044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-917044 -n default-k8s-diff-port-917044
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-499597 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-499597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-499597
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-032318 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-032318

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-032318

>>> host: /etc/nsswitch.conf:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/hosts:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/resolv.conf:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-032318

>>> host: crictl pods:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: crictl containers:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> k8s: describe netcat deployment:
error: context "cilium-032318" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-032318" does not exist

>>> k8s: netcat logs:
error: context "cilium-032318" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-032318" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-032318" does not exist

>>> k8s: coredns logs:
error: context "cilium-032318" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-032318" does not exist

>>> k8s: api server logs:
error: context "cilium-032318" does not exist

>>> host: /etc/cni:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: ip a s:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: ip r s:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: iptables-save:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: iptables table nat:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-032318

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-032318

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-032318" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-032318" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-032318

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-032318

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-032318" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-032318" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-032318" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-032318" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-032318" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: kubelet daemon config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> k8s: kubelet logs:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-715609/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:13:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-186176
contexts:
- context:
    cluster: NoKubernetes-186176
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:13:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-186176
  name: NoKubernetes-186176
current-context: NoKubernetes-186176
kind: Config
preferences: {}
users:
- name: NoKubernetes-186176
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/NoKubernetes-186176/client.crt
    client-key: /home/jenkins/minikube-integration/19678-715609/.minikube/profiles/NoKubernetes-186176/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-032318

>>> host: docker daemon status:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: docker daemon config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: docker system info:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: cri-docker daemon status:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: cri-docker daemon config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: cri-dockerd version:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: containerd daemon status:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: containerd daemon config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: containerd config dump:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: crio daemon status:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: crio daemon config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: /etc/crio:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

>>> host: crio config:
* Profile "cilium-032318" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-032318"

----------------------- debugLogs end: cilium-032318 [took: 6.08688255s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-032318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-032318
--- SKIP: TestNetworkPlugins/group/cilium (6.29s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-321489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-321489
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
