Test Report: Docker_Linux 19734

                    
                      795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435
                    
                

Test fail (1/342)

Order failed test Duration
33 TestAddons/parallel/Registry 72.32
x
+
TestAddons/parallel/Registry (72.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.304235ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-9bg4w" [da79db35-9dbe-40b6-bc10-153757b8bf2a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002563452s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8lrkc" [0863352b-681f-45ef-a925-ee3ba3eb1198] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002857322s
addons_test.go:338: (dbg) Run:  kubectl --context addons-485025 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-485025 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-485025 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.075592196s)

                                                
                                                
-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

                                                
                                                
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-485025 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 ip
2024/09/30 10:33:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-485025
helpers_test.go:235: (dbg) docker inspect addons-485025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d",
	        "Created": "2024-09-30T10:20:56.786096188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12517,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:20:56.915680776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fba5f082b59effd6acfcb1eed3d3f86a23bd3a65463877f8197a730d49f52a09",
	        "ResolvConfPath": "/var/lib/docker/containers/57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d/hostname",
	        "HostsPath": "/var/lib/docker/containers/57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d/hosts",
	        "LogPath": "/var/lib/docker/containers/57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d/57c067488a26c5159be154c8352674e3f1d4a9cff700da00ad1c2b4e5cdb879d-json.log",
	        "Name": "/addons-485025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-485025:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-485025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d326d47c67abcde5c405b5d2bc3203ddd9f5ed2ad55983bd8b9ac84aa3c1947-init/diff:/var/lib/docker/overlay2/71ddbbc874c8012f0ca6cba309f810cb206996525979cb1107bf2f7cf9f42c72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d326d47c67abcde5c405b5d2bc3203ddd9f5ed2ad55983bd8b9ac84aa3c1947/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d326d47c67abcde5c405b5d2bc3203ddd9f5ed2ad55983bd8b9ac84aa3c1947/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d326d47c67abcde5c405b5d2bc3203ddd9f5ed2ad55983bd8b9ac84aa3c1947/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-485025",
	                "Source": "/var/lib/docker/volumes/addons-485025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-485025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-485025",
	                "name.minikube.sigs.k8s.io": "addons-485025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c4a90f8c532aa8d22318a0057de685acfefe6d07f4992823fa5550e582622a4",
	            "SandboxKey": "/var/run/docker/netns/4c4a90f8c532",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-485025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a5e83fc3193b7ccbf4d12708117400c9967d758e8e666093fc0024b60a1253fc",
	                    "EndpointID": "9aeabceb5b5d57a1b1623f8d8275166e161a072232abcdc94d3d91969c077b8b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-485025",
	                        "57c067488a26"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-485025 -n addons-485025
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-079911 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | download-docker-079911                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-079911                                                                   | download-docker-079911 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-919884   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | binary-mirror-919884                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33823                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919884                                                                     | binary-mirror-919884   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| addons  | disable dashboard -p                                                                        | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | addons-485025                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | addons-485025                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-485025 --wait=true                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | -p addons-485025                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-485025 addons                                                                        | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | addons-485025                                                                               |                        |         |         |                     |                     |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-485025 ssh curl -s                                                                   | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-485025 ip                                                                            | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | -p addons-485025                                                                            |                        |         |         |                     |                     |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-485025 ssh cat                                                                       | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | /opt/local-path-provisioner/pvc-c9d28883-8cdc-411a-b481-ed6040da0be1_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | addons-485025                                                                               |                        |         |         |                     |                     |
	| addons  | addons-485025 addons                                                                        | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-485025 addons                                                                        | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-485025 ip                                                                            | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	| addons  | addons-485025 addons disable                                                                | addons-485025          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:33
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:33.219133   11756 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:33.219357   11756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:33.219365   11756 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:33.219369   11756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:33.219541   11756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:20:33.220132   11756 out.go:352] Setting JSON to false
	I0930 10:20:33.220967   11756 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":181,"bootTime":1727691452,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:33.221062   11756 start.go:139] virtualization: kvm guest
	I0930 10:20:33.223169   11756 out.go:177] * [addons-485025] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:20:33.224434   11756 notify.go:220] Checking for updates...
	I0930 10:20:33.224437   11756 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:20:33.225796   11756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:33.227170   11756 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:20:33.228530   11756 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	I0930 10:20:33.229724   11756 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:20:33.231239   11756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:20:33.232760   11756 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:33.254767   11756 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:33.254851   11756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:33.298501   11756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:20:33.289418428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:33.298646   11756 docker.go:318] overlay module found
	I0930 10:20:33.301167   11756 out.go:177] * Using the docker driver based on user configuration
	I0930 10:20:33.302222   11756 start.go:297] selected driver: docker
	I0930 10:20:33.302234   11756 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:33.302244   11756 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:20:33.303029   11756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:33.345538   11756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:20:33.337483411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:33.345676   11756 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:33.345911   11756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:20:33.347559   11756 out.go:177] * Using Docker driver with root privileges
	I0930 10:20:33.348677   11756 cni.go:84] Creating CNI manager for ""
	I0930 10:20:33.348746   11756 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:20:33.348762   11756 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 10:20:33.348848   11756 start.go:340] cluster config:
	{Name:addons-485025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-485025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:20:33.350140   11756 out.go:177] * Starting "addons-485025" primary control-plane node in "addons-485025" cluster
	I0930 10:20:33.351199   11756 cache.go:121] Beginning downloading kic base image for docker with docker
	I0930 10:20:33.352360   11756 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:20:33.353610   11756 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:20:33.353640   11756 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0930 10:20:33.353647   11756 cache.go:56] Caching tarball of preloaded images
	I0930 10:20:33.353697   11756 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:20:33.353716   11756 preload.go:172] Found /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0930 10:20:33.353723   11756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 10:20:33.354029   11756 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/config.json ...
	I0930 10:20:33.354050   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/config.json: {Name:mk35b0d5ca357d92893ad556a0bad6107bb98cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:20:33.368962   11756 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:20:33.369057   11756 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:20:33.369072   11756 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:20:33.369076   11756 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:20:33.369085   11756 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:20:33.369092   11756 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:20:45.093181   11756 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:20:45.093220   11756 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:20:45.093263   11756 start.go:360] acquireMachinesLock for addons-485025: {Name:mk599cf391d05d083ec36c01dacad090ed0c2f88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:45.093351   11756 start.go:364] duration metric: took 69.396µs to acquireMachinesLock for "addons-485025"
	I0930 10:20:45.093373   11756 start.go:93] Provisioning new machine with config: &{Name:addons-485025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-485025 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:20:45.093454   11756 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:20:45.095092   11756 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:20:45.095325   11756 start.go:159] libmachine.API.Create for "addons-485025" (driver="docker")
	I0930 10:20:45.095363   11756 client.go:168] LocalClient.Create starting
	I0930 10:20:45.095475   11756 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem
	I0930 10:20:45.343341   11756 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/cert.pem
	I0930 10:20:45.536576   11756 cli_runner.go:164] Run: docker network inspect addons-485025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:20:45.551603   11756 cli_runner.go:211] docker network inspect addons-485025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:20:45.551671   11756 network_create.go:284] running [docker network inspect addons-485025] to gather additional debugging logs...
	I0930 10:20:45.551689   11756 cli_runner.go:164] Run: docker network inspect addons-485025
	W0930 10:20:45.566238   11756 cli_runner.go:211] docker network inspect addons-485025 returned with exit code 1
	I0930 10:20:45.566261   11756 network_create.go:287] error running [docker network inspect addons-485025]: docker network inspect addons-485025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-485025 not found
	I0930 10:20:45.566276   11756 network_create.go:289] output of [docker network inspect addons-485025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-485025 not found
	
	** /stderr **
	I0930 10:20:45.566354   11756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:20:45.581646   11756 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001500a60}
	I0930 10:20:45.581689   11756 network_create.go:124] attempt to create docker network addons-485025 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:20:45.581732   11756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-485025 addons-485025
	I0930 10:20:45.640434   11756 network_create.go:108] docker network addons-485025 192.168.49.0/24 created
	I0930 10:20:45.640530   11756 kic.go:121] calculated static IP "192.168.49.2" for the "addons-485025" container
	I0930 10:20:45.640583   11756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:20:45.655364   11756 cli_runner.go:164] Run: docker volume create addons-485025 --label name.minikube.sigs.k8s.io=addons-485025 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:20:45.671716   11756 oci.go:103] Successfully created a docker volume addons-485025
	I0930 10:20:45.671776   11756 cli_runner.go:164] Run: docker run --rm --name addons-485025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-485025 --entrypoint /usr/bin/test -v addons-485025:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:20:52.869918   11756 cli_runner.go:217] Completed: docker run --rm --name addons-485025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-485025 --entrypoint /usr/bin/test -v addons-485025:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (7.198105142s)
	I0930 10:20:52.869942   11756 oci.go:107] Successfully prepared a docker volume addons-485025
	I0930 10:20:52.869957   11756 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:20:52.869974   11756 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:20:52.870015   11756 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-485025:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:20:56.724866   11756 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-485025:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.854797796s)
	I0930 10:20:56.724902   11756 kic.go:203] duration metric: took 3.854924762s to extract preloaded images to volume ...
	W0930 10:20:56.725054   11756 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:20:56.725191   11756 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:20:56.770876   11756 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-485025 --name addons-485025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-485025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-485025 --network addons-485025 --ip 192.168.49.2 --volume addons-485025:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:20:57.086568   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Running}}
	I0930 10:20:57.103912   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:20:57.121566   11756 cli_runner.go:164] Run: docker exec addons-485025 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:20:57.161487   11756 oci.go:144] the created container "addons-485025" has a running status.
	I0930 10:20:57.161514   11756 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa...
	I0930 10:20:57.413896   11756 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:20:57.433465   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:20:57.453562   11756 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:20:57.453591   11756 kic_runner.go:114] Args: [docker exec --privileged addons-485025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:20:57.492949   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:20:57.513671   11756 machine.go:93] provisionDockerMachine start ...
	I0930 10:20:57.513769   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:57.533177   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:57.533391   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:57.533407   11756 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:20:57.659673   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-485025
	
	I0930 10:20:57.659698   11756 ubuntu.go:169] provisioning hostname "addons-485025"
	I0930 10:20:57.659756   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:57.677329   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:57.677522   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:57.677544   11756 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-485025 && echo "addons-485025" | sudo tee /etc/hostname
	I0930 10:20:57.798265   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-485025
	
	I0930 10:20:57.798336   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:57.814131   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:57.814300   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:57.814317   11756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-485025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-485025/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-485025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:20:57.928277   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:20:57.928307   11756 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3685/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3685/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3685/.minikube}
	I0930 10:20:57.928367   11756 ubuntu.go:177] setting up certificates
	I0930 10:20:57.928379   11756 provision.go:84] configureAuth start
	I0930 10:20:57.928421   11756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-485025
	I0930 10:20:57.944024   11756 provision.go:143] copyHostCerts
	I0930 10:20:57.944086   11756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3685/.minikube/ca.pem (1078 bytes)
	I0930 10:20:57.944206   11756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3685/.minikube/cert.pem (1123 bytes)
	I0930 10:20:57.944279   11756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3685/.minikube/key.pem (1675 bytes)
	I0930 10:20:57.944356   11756 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3685/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca-key.pem org=jenkins.addons-485025 san=[127.0.0.1 192.168.49.2 addons-485025 localhost minikube]
	I0930 10:20:58.113651   11756 provision.go:177] copyRemoteCerts
	I0930 10:20:58.113705   11756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:20:58.113738   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:58.129515   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:20:58.212861   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 10:20:58.234883   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 10:20:58.255499   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:20:58.275355   11756 provision.go:87] duration metric: took 346.962396ms to configureAuth
	I0930 10:20:58.275390   11756 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:20:58.275546   11756 config.go:182] Loaded profile config "addons-485025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:20:58.275597   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:58.292012   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:58.292177   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:58.292189   11756 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 10:20:58.404299   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0930 10:20:58.404345   11756 ubuntu.go:71] root file system type: overlay
	I0930 10:20:58.404508   11756 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 10:20:58.404585   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:58.420748   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:58.420915   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:58.420972   11756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 10:20:58.542243   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0930 10:20:58.542323   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:58.558317   11756 main.go:141] libmachine: Using SSH client type: native
	I0930 10:20:58.558477   11756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:20:58.558495   11756 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 10:20:59.220674   11756 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:29.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-30 10:20:58.537302378 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0930 10:20:59.220706   11756 machine.go:96] duration metric: took 1.707008832s to provisionDockerMachine
	I0930 10:20:59.220719   11756 client.go:171] duration metric: took 14.125345483s to LocalClient.Create
	I0930 10:20:59.220741   11756 start.go:167] duration metric: took 14.125415591s to libmachine.API.Create "addons-485025"
	I0930 10:20:59.220751   11756 start.go:293] postStartSetup for "addons-485025" (driver="docker")
	I0930 10:20:59.220761   11756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:20:59.220819   11756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:20:59.220859   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:59.237212   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:20:59.320350   11756 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:20:59.323087   11756 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:20:59.323114   11756 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:20:59.323125   11756 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:20:59.323130   11756 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:20:59.323141   11756 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3685/.minikube/addons for local assets ...
	I0930 10:20:59.323195   11756 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3685/.minikube/files for local assets ...
	I0930 10:20:59.323217   11756 start.go:296] duration metric: took 102.46109ms for postStartSetup
	I0930 10:20:59.323466   11756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-485025
	I0930 10:20:59.338761   11756 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/config.json ...
	I0930 10:20:59.338995   11756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:20:59.339031   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:59.354357   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:20:59.432660   11756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:20:59.436441   11756 start.go:128] duration metric: took 14.342975669s to createHost
	I0930 10:20:59.436464   11756 start.go:83] releasing machines lock for "addons-485025", held for 14.343101517s
	I0930 10:20:59.436517   11756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-485025
	I0930 10:20:59.452089   11756 ssh_runner.go:195] Run: cat /version.json
	I0930 10:20:59.452131   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:59.452176   11756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:20:59.452243   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:20:59.468987   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:20:59.469635   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:20:59.614213   11756 ssh_runner.go:195] Run: systemctl --version
	I0930 10:20:59.617896   11756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:20:59.621440   11756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0930 10:20:59.641760   11756 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:20:59.641816   11756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:20:59.664706   11756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0930 10:20:59.664727   11756 start.go:495] detecting cgroup driver to use...
	I0930 10:20:59.664761   11756 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:20:59.664855   11756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:20:59.677870   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 10:20:59.685609   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 10:20:59.693489   11756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 10:20:59.693550   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 10:20:59.701321   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:20:59.709250   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 10:20:59.717110   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:20:59.725114   11756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:20:59.732630   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 10:20:59.740614   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 10:20:59.748687   11756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 10:20:59.756493   11756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:20:59.763258   11756 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 10:20:59.763300   11756 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 10:20:59.774506   11756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:20:59.781115   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:20:59.849023   11756 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 10:20:59.921415   11756 start.go:495] detecting cgroup driver to use...
	I0930 10:20:59.921479   11756 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:20:59.921523   11756 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 10:20:59.931848   11756 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0930 10:20:59.931915   11756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 10:20:59.943115   11756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:20:59.958258   11756 ssh_runner.go:195] Run: which cri-dockerd
	I0930 10:20:59.961374   11756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 10:20:59.970585   11756 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0930 10:20:59.988236   11756 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 10:21:00.087140   11756 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 10:21:00.177813   11756 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 10:21:00.177931   11756 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0930 10:21:00.193665   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:00.265382   11756 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 10:21:00.498726   11756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 10:21:00.508842   11756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:00.518287   11756 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 10:21:00.593272   11756 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 10:21:00.664526   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:00.738087   11756 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 10:21:00.749792   11756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:00.759287   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:00.830699   11756 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 10:21:00.888454   11756 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 10:21:00.888529   11756 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0930 10:21:00.891663   11756 start.go:563] Will wait 60s for crictl version
	I0930 10:21:00.891714   11756 ssh_runner.go:195] Run: which crictl
	I0930 10:21:00.894654   11756 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:21:00.924382   11756 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0930 10:21:00.924448   11756 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 10:21:00.946157   11756 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 10:21:00.970224   11756 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0930 10:21:00.970298   11756 cli_runner.go:164] Run: docker network inspect addons-485025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:21:00.985001   11756 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:21:00.988197   11756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:21:00.997681   11756 kubeadm.go:883] updating cluster {Name:addons-485025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-485025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:21:00.997780   11756 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:21:00.997823   11756 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 10:21:01.016460   11756 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 10:21:01.016492   11756 docker.go:615] Images already preloaded, skipping extraction
	I0930 10:21:01.016542   11756 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 10:21:01.033501   11756 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 10:21:01.033528   11756 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:21:01.033537   11756 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0930 10:21:01.033622   11756 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-485025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-485025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:21:01.033668   11756 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0930 10:21:01.074112   11756 cni.go:84] Creating CNI manager for ""
	I0930 10:21:01.074135   11756 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:01.074144   11756 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:21:01.074161   11756 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-485025 NodeName:addons-485025 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:21:01.074275   11756 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-485025"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:21:01.074323   11756 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:21:01.082038   11756 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:21:01.082091   11756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:21:01.089518   11756 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 10:21:01.104124   11756 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:21:01.118583   11756 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0930 10:21:01.133036   11756 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:21:01.135706   11756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:21:01.144832   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:01.224247   11756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:01.236076   11756 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025 for IP: 192.168.49.2
	I0930 10:21:01.236094   11756 certs.go:194] generating shared ca certs ...
	I0930 10:21:01.236107   11756 certs.go:226] acquiring lock for ca certs: {Name:mk681cd5e73e48fcc7a587a82627f61623810efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.236206   11756 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3685/.minikube/ca.key
	I0930 10:21:01.334683   11756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt ...
	I0930 10:21:01.334710   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt: {Name:mkdee3312b387c39866499d029665cc3f900e216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.334866   11756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3685/.minikube/ca.key ...
	I0930 10:21:01.334876   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/ca.key: {Name:mk407f9207954f7156758dee60d9450547f464a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.334946   11756 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.key
	I0930 10:21:01.424081   11756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.crt ...
	I0930 10:21:01.424106   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.crt: {Name:mk7a32903d7331d5a1ac8dd98f3fadc8e85d608d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.424246   11756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.key ...
	I0930 10:21:01.424256   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.key: {Name:mk908174bbb25f270b91f85987a291c59ffcb1a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.424314   11756 certs.go:256] generating profile certs ...
	I0930 10:21:01.424385   11756 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.key
	I0930 10:21:01.424402   11756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt with IP's: []
	I0930 10:21:01.600394   11756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt ...
	I0930 10:21:01.600420   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: {Name:mk83ac12329b857a1fef1eebd94b6263c29b0ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.600570   11756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.key ...
	I0930 10:21:01.600579   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.key: {Name:mk63374181f78b45cccbae715d9b62677444f222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.600644   11756 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key.301cc93f
	I0930 10:21:01.600662   11756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt.301cc93f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:21:01.732610   11756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt.301cc93f ...
	I0930 10:21:01.732639   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt.301cc93f: {Name:mk7dabdde80e57d85cae52592cdbcbb2db8c842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.732785   11756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key.301cc93f ...
	I0930 10:21:01.732795   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key.301cc93f: {Name:mk76c6232d9c7f5efdefb49845b7c304ace06c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.732868   11756 certs.go:381] copying /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt.301cc93f -> /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt
	I0930 10:21:01.732937   11756 certs.go:385] copying /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key.301cc93f -> /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key
	I0930 10:21:01.732981   11756 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.key
	I0930 10:21:01.732999   11756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.crt with IP's: []
	I0930 10:21:01.879796   11756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.crt ...
	I0930 10:21:01.879836   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.crt: {Name:mk418eeddcbb7b19c4f94e41b568c7e6aae3a678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.880040   11756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.key ...
	I0930 10:21:01.880056   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.key: {Name:mk93e9608c5e44aa9cfe3d6336fe8342484835a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:01.880257   11756 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 10:21:01.880303   11756 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/ca.pem (1078 bytes)
	I0930 10:21:01.880362   11756 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:21:01.880397   11756 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3685/.minikube/certs/key.pem (1675 bytes)
	I0930 10:21:01.881047   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:21:01.902522   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 10:21:01.922581   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:21:01.942764   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 10:21:01.962610   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:21:01.982652   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 10:21:02.003721   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:21:02.025021   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 10:21:02.046049   11756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:21:02.065988   11756 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:21:02.080873   11756 ssh_runner.go:195] Run: openssl version
	I0930 10:21:02.085622   11756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:21:02.093399   11756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:02.096334   11756 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:02.096389   11756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:02.102353   11756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:21:02.110105   11756 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:21:02.112640   11756 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:21:02.112682   11756 kubeadm.go:392] StartCluster: {Name:addons-485025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-485025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:21:02.112778   11756 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 10:21:02.128932   11756 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:21:02.136471   11756 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:21:02.143843   11756 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:21:02.143888   11756 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:21:02.151112   11756 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:21:02.151131   11756 kubeadm.go:157] found existing configuration files:
	
	I0930 10:21:02.151164   11756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:21:02.158226   11756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:21:02.158280   11756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:21:02.165155   11756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:21:02.172227   11756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:21:02.172273   11756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:21:02.178889   11756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:21:02.185637   11756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:21:02.185686   11756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:21:02.192143   11756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:21:02.198874   11756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:21:02.198907   11756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:21:02.205471   11756 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:21:02.238813   11756 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:21:02.238872   11756 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:21:02.257143   11756 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:21:02.257214   11756 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0930 10:21:02.257251   11756 kubeadm.go:310] OS: Linux
	I0930 10:21:02.257308   11756 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:21:02.257358   11756 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:21:02.257411   11756 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:21:02.257468   11756 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:21:02.257518   11756 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:21:02.257612   11756 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:21:02.257656   11756 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:21:02.257726   11756 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:21:02.257804   11756 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:21:02.304155   11756 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:21:02.304367   11756 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:21:02.304515   11756 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:21:02.314270   11756 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:21:02.317649   11756 out.go:235]   - Generating certificates and keys ...
	I0930 10:21:02.317750   11756 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:21:02.317850   11756 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:21:02.486398   11756 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:21:02.589470   11756 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:21:02.670718   11756 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:21:02.785854   11756 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:21:02.829563   11756 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:21:02.829697   11756 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-485025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:21:02.950949   11756 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:21:02.951122   11756 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-485025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:21:03.128508   11756 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:21:03.307756   11756 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:21:03.515121   11756 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:21:03.515207   11756 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:21:03.826089   11756 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:21:04.065847   11756 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:21:04.320280   11756 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:21:04.775424   11756 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:21:04.855255   11756 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:21:04.855849   11756 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:21:04.858139   11756 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:21:04.860027   11756 out.go:235]   - Booting up control plane ...
	I0930 10:21:04.860124   11756 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:21:04.860215   11756 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:21:04.860313   11756 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:21:04.868681   11756 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:21:04.873347   11756 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:21:04.873415   11756 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:21:04.952159   11756 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:21:04.952355   11756 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:21:05.453403   11756 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.466407ms
	I0930 10:21:05.453493   11756 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:21:09.954613   11756 kubeadm.go:310] [api-check] The API server is healthy after 4.501160526s
	I0930 10:21:09.964808   11756 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:21:09.973906   11756 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:21:09.991429   11756 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:21:09.991672   11756 kubeadm.go:310] [mark-control-plane] Marking the node addons-485025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:21:09.998108   11756 kubeadm.go:310] [bootstrap-token] Using token: f4zrib.168z9svcsibzej7f
	I0930 10:21:09.999453   11756 out.go:235]   - Configuring RBAC rules ...
	I0930 10:21:09.999625   11756 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:21:10.002377   11756 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:21:10.009745   11756 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:21:10.011812   11756 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:21:10.014068   11756 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:21:10.016185   11756 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:21:10.359920   11756 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:21:10.775527   11756 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:21:11.361158   11756 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:21:11.361970   11756 kubeadm.go:310] 
	I0930 10:21:11.362066   11756 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:21:11.362078   11756 kubeadm.go:310] 
	I0930 10:21:11.362179   11756 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:21:11.362189   11756 kubeadm.go:310] 
	I0930 10:21:11.362238   11756 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:21:11.362325   11756 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:21:11.362408   11756 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:21:11.362417   11756 kubeadm.go:310] 
	I0930 10:21:11.362512   11756 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:21:11.362538   11756 kubeadm.go:310] 
	I0930 10:21:11.362622   11756 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:21:11.362632   11756 kubeadm.go:310] 
	I0930 10:21:11.362706   11756 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:21:11.362829   11756 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:21:11.362926   11756 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:21:11.362935   11756 kubeadm.go:310] 
	I0930 10:21:11.363042   11756 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:21:11.363148   11756 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:21:11.363157   11756 kubeadm.go:310] 
	I0930 10:21:11.363265   11756 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f4zrib.168z9svcsibzej7f \
	I0930 10:21:11.363392   11756 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2019e7f4cdd44306d6ad5bfe800e8b084e0bcb230a13ba581c51d5d41d39980c \
	I0930 10:21:11.363427   11756 kubeadm.go:310] 	--control-plane 
	I0930 10:21:11.363435   11756 kubeadm.go:310] 
	I0930 10:21:11.363536   11756 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:21:11.363545   11756 kubeadm.go:310] 
	I0930 10:21:11.363628   11756 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f4zrib.168z9svcsibzej7f \
	I0930 10:21:11.363726   11756 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2019e7f4cdd44306d6ad5bfe800e8b084e0bcb230a13ba581c51d5d41d39980c 
	I0930 10:21:11.365434   11756 kubeadm.go:310] W0930 10:21:02.236453    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:11.365694   11756 kubeadm.go:310] W0930 10:21:02.237051    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:11.365892   11756 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0930 10:21:11.366060   11756 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:21:11.366091   11756 cni.go:84] Creating CNI manager for ""
	I0930 10:21:11.366112   11756 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:11.367633   11756 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 10:21:11.368808   11756 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 10:21:11.376705   11756 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 10:21:11.392399   11756 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:21:11.392505   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:11.392528   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-485025 minikube.k8s.io/updated_at=2024_09_30T10_21_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-485025 minikube.k8s.io/primary=true
	I0930 10:21:11.399034   11756 ops.go:34] apiserver oom_adj: -16
	I0930 10:21:11.463873   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:11.964838   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:12.464447   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:12.964756   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:13.464510   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:13.964073   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:14.464665   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:14.964446   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:15.463961   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:15.964053   11756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:16.022888   11756 kubeadm.go:1113] duration metric: took 4.630450401s to wait for elevateKubeSystemPrivileges
	I0930 10:21:16.022926   11756 kubeadm.go:394] duration metric: took 13.910246458s to StartCluster
	I0930 10:21:16.022942   11756 settings.go:142] acquiring lock: {Name:mk72471fb7cd04ec1061860566590829aa9a0fa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.023038   11756 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:21:16.023352   11756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3685/kubeconfig: {Name:mkb502c0ffb71c0d28ec7c189a15446320a80b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.023509   11756 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:21:16.023554   11756 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:21:16.023621   11756 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:21:16.023737   11756 addons.go:69] Setting yakd=true in profile "addons-485025"
	I0930 10:21:16.023759   11756 addons.go:234] Setting addon yakd=true in "addons-485025"
	I0930 10:21:16.023754   11756 addons.go:69] Setting ingress=true in profile "addons-485025"
	I0930 10:21:16.023792   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.023796   11756 addons.go:234] Setting addon ingress=true in "addons-485025"
	I0930 10:21:16.023801   11756 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-485025"
	I0930 10:21:16.023800   11756 addons.go:69] Setting inspektor-gadget=true in profile "addons-485025"
	I0930 10:21:16.023817   11756 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-485025"
	I0930 10:21:16.023827   11756 addons.go:234] Setting addon inspektor-gadget=true in "addons-485025"
	I0930 10:21:16.023830   11756 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-485025"
	I0930 10:21:16.023833   11756 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-485025"
	I0930 10:21:16.023843   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.023856   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.023867   11756 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-485025"
	I0930 10:21:16.023881   11756 addons.go:69] Setting storage-provisioner=true in profile "addons-485025"
	I0930 10:21:16.023899   11756 addons.go:234] Setting addon storage-provisioner=true in "addons-485025"
	I0930 10:21:16.023908   11756 addons.go:69] Setting cloud-spanner=true in profile "addons-485025"
	I0930 10:21:16.023920   11756 addons.go:234] Setting addon cloud-spanner=true in "addons-485025"
	I0930 10:21:16.023924   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.023938   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.024151   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024279   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024306   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024341   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024373   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.023862   11756 addons.go:69] Setting default-storageclass=true in profile "addons-485025"
	I0930 10:21:16.024383   11756 addons.go:69] Setting gcp-auth=true in profile "addons-485025"
	I0930 10:21:16.024433   11756 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-485025"
	I0930 10:21:16.024439   11756 mustload.go:65] Loading cluster: addons-485025
	I0930 10:21:16.023791   11756 addons.go:69] Setting metrics-server=true in profile "addons-485025"
	I0930 10:21:16.024510   11756 addons.go:234] Setting addon metrics-server=true in "addons-485025"
	I0930 10:21:16.024532   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.024599   11756 config.go:182] Loaded profile config "addons-485025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:16.024700   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.023797   11756 config.go:182] Loaded profile config "addons-485025": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:16.024403   11756 addons.go:69] Setting ingress-dns=true in profile "addons-485025"
	I0930 10:21:16.024770   11756 addons.go:234] Setting addon ingress-dns=true in "addons-485025"
	I0930 10:21:16.024800   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.024821   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024984   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.025271   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.023768   11756 addons.go:69] Setting registry=true in profile "addons-485025"
	I0930 10:21:16.025545   11756 addons.go:234] Setting addon registry=true in "addons-485025"
	I0930 10:21:16.025585   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.026065   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.023819   11756 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-485025"
	I0930 10:21:16.027061   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.024396   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.024414   11756 addons.go:69] Setting volcano=true in profile "addons-485025"
	I0930 10:21:16.030592   11756 addons.go:234] Setting addon volcano=true in "addons-485025"
	I0930 10:21:16.030689   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.030802   11756 out.go:177] * Verifying Kubernetes components...
	I0930 10:21:16.024416   11756 addons.go:69] Setting volumesnapshots=true in profile "addons-485025"
	I0930 10:21:16.031623   11756 addons.go:234] Setting addon volumesnapshots=true in "addons-485025"
	I0930 10:21:16.031674   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.032125   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.023900   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.032671   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.032849   11756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:16.033192   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.057124   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.074152   11756 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:21:16.074954   11756 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:21:16.075759   11756 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:21:16.075782   11756 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:21:16.075862   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.076640   11756 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:16.076658   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:21:16.076712   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.081840   11756 addons.go:234] Setting addon default-storageclass=true in "addons-485025"
	I0930 10:21:16.081884   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.082319   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.082744   11756 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:21:16.084865   11756 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:21:16.084891   11756 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:21:16.084944   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.085355   11756 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-485025"
	I0930 10:21:16.085408   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.085936   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:16.090976   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:16.095375   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:21:16.096367   11756 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:21:16.096447   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:21:16.096464   11756 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:21:16.096523   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.097487   11756 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:16.097508   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:21:16.097560   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.112714   11756 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:21:16.114984   11756 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:21:16.116579   11756 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:21:16.116604   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:21:16.116739   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.123296   11756 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:16.128447   11756 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:16.128474   11756 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:21:16.128532   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.130281   11756 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:16.131789   11756 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:21:16.133184   11756 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:16.133207   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:21:16.133263   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.138718   11756 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:21:16.139765   11756 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:21:16.139783   11756 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:21:16.139808   11756 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:21:16.139857   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.141977   11756 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:21:16.143281   11756 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:16.143300   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:21:16.143351   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.146004   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.146530   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.147825   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.149089   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.151912   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.164479   11756 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0930 10:21:16.164550   11756 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:21:16.166611   11756 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:16.166626   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:21:16.166681   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.166841   11756 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0930 10:21:16.168301   11756 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0930 10:21:16.170798   11756 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:16.170821   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0930 10:21:16.170870   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.170963   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:21:16.172529   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:21:16.173576   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:21:16.174843   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:21:16.176145   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:21:16.177434   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:21:16.178455   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:21:16.179540   11756 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 10:21:16.180683   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:21:16.180705   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:21:16.180759   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.181121   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.181281   11756 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:21:16.186298   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.186541   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.188289   11756 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:16.188303   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:21:16.188407   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:16.199205   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.204726   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.204731   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.214366   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.218319   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:16.220951   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	W0930 10:21:16.252637   11756 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0930 10:21:16.252669   11756 retry.go:31] will retry after 232.123848ms: ssh: handshake failed: EOF
	I0930 10:21:16.451637   11756 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:21:16.455002   11756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:16.550773   11756 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:21:16.550797   11756 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:21:16.573602   11756 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:21:16.573692   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:21:16.574147   11756 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:21:16.574166   11756 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:21:16.649864   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:16.658600   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:16.762605   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:16.764584   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:16.765559   11756 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:21:16.765655   11756 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:21:16.771310   11756 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:21:16.771365   11756 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:21:16.850159   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:16.851243   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:16.861569   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:16.865447   11756 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:21:16.865497   11756 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:21:16.954593   11756 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:16.954635   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:21:16.954905   11756 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:21:16.954926   11756 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:21:16.958759   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:17.061614   11756 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:21:17.061703   11756 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:21:17.072970   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:17.170637   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:21:17.170717   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:21:17.256054   11756 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:21:17.256086   11756 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:21:17.265029   11756 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:17.265055   11756 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:21:17.351890   11756 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:21:17.351921   11756 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:21:17.450182   11756 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:21:17.450208   11756 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:21:17.470612   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:21:17.470689   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:21:17.754449   11756 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:17.754489   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:21:17.855454   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:21:17.855483   11756 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:21:17.966172   11756 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:21:17.966202   11756 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:21:18.049700   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:21:18.049741   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:21:18.056418   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:18.157036   11756 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.705336074s)
	I0930 10:21:18.157133   11756 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0930 10:21:18.157334   11756 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.702306884s)
	I0930 10:21:18.159295   11756 node_ready.go:35] waiting up to 6m0s for node "addons-485025" to be "Ready" ...
	I0930 10:21:18.163729   11756 node_ready.go:49] node "addons-485025" has status "Ready":"True"
	I0930 10:21:18.163751   11756 node_ready.go:38] duration metric: took 4.387973ms for node "addons-485025" to be "Ready" ...
	I0930 10:21:18.163760   11756 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:18.173013   11756 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:18.257187   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:18.369427   11756 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:18.369518   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:21:18.550480   11756 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:21:18.550583   11756 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:21:18.663508   11756 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-485025" context rescaled to 1 replicas
	I0930 10:21:18.765168   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:18.949982   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:21:18.950009   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:21:19.452593   11756 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:21:19.452631   11756 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:21:19.567372   11756 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:21:19.567463   11756 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:21:19.663756   11756 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:21:19.663833   11756 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:21:19.852285   11756 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:21:19.852402   11756 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:21:20.258294   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:20.259118   11756 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:20.259141   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:21:20.350487   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.700581366s)
	I0930 10:21:20.350614   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.691990695s)
	I0930 10:21:20.356247   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:21:20.356272   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:21:20.562181   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:20.953868   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:21:20.953957   11756 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:21:21.271931   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:21:21.271957   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:21:21.556140   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:21:21.556166   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:21:21.950013   11756 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:21.950036   11756 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:21:22.149983   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:22.267672   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:23.154948   11756 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:21:23.155088   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:23.182168   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:23.967250   11756 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:21:24.251183   11756 addons.go:234] Setting addon gcp-auth=true in "addons-485025"
	I0930 10:21:24.251263   11756 host.go:66] Checking if "addons-485025" exists ...
	I0930 10:21:24.252873   11756 cli_runner.go:164] Run: docker container inspect addons-485025 --format={{.State.Status}}
	I0930 10:21:24.280611   11756 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:21:24.280671   11756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-485025
	I0930 10:21:24.297658   11756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/addons-485025/id_rsa Username:docker}
	I0930 10:21:24.758471   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:25.461056   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.698408037s)
	I0930 10:21:25.461099   11756 addons.go:475] Verifying addon ingress=true in "addons-485025"
	I0930 10:21:25.461272   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.610004912s)
	I0930 10:21:25.461331   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.599739493s)
	I0930 10:21:25.461177   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.69651338s)
	I0930 10:21:25.461207   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.611021267s)
	I0930 10:21:25.463439   11756 out.go:177] * Verifying ingress addon...
	I0930 10:21:25.467489   11756 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0930 10:21:25.470700   11756 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:21:25.471964   11756 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:21:25.471985   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:25.972182   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:26.474483   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:27.062244   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:27.254092   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:27.473674   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:27.971384   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:28.257250   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.298459614s)
	I0930 10:21:28.257402   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.184384551s)
	I0930 10:21:28.257435   11756 addons.go:475] Verifying addon registry=true in "addons-485025"
	I0930 10:21:28.257675   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.201171881s)
	I0930 10:21:28.257772   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.000472239s)
	I0930 10:21:28.257982   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.492778905s)
	W0930 10:21:28.258014   11756 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:28.258031   11756 addons.go:475] Verifying addon metrics-server=true in "addons-485025"
	I0930 10:21:28.258034   11756 retry.go:31] will retry after 140.689133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:28.258163   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.695934948s)
	I0930 10:21:28.258964   11756 out.go:177] * Verifying registry addon...
	I0930 10:21:28.259734   11756 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-485025 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:21:28.261464   11756 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:21:28.264480   11756 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:21:28.264502   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:28.399212   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:28.550088   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:28.767098   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:29.050421   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:29.150883   11756 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.870247124s)
	I0930 10:21:29.150799   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.000764788s)
	I0930 10:21:29.151121   11756 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-485025"
	I0930 10:21:29.152580   11756 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:21:29.152682   11756 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:21:29.154390   11756 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:29.155412   11756 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:21:29.155846   11756 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:21:29.155869   11756 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:21:29.162273   11756 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:21:29.162298   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:29.256658   11756 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:21:29.256696   11756 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:21:29.265406   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:29.281269   11756 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:29.281289   11756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:21:29.551360   11756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:29.552282   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:29.660487   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:29.678965   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:29.765779   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:29.972072   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:30.161901   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:30.265251   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:30.474119   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:30.661254   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:30.675060   11756 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qxcw9" not found
	I0930 10:21:30.675089   11756 pod_ready.go:82] duration metric: took 12.502049818s for pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace to be "Ready" ...
	E0930 10:21:30.675101   11756 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-qxcw9" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qxcw9" not found
	I0930 10:21:30.675112   11756 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:30.765443   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:30.953009   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.553752045s)
	I0930 10:21:30.975581   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:30.979753   11756 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428345524s)
	I0930 10:21:30.982008   11756 addons.go:475] Verifying addon gcp-auth=true in "addons-485025"
	I0930 10:21:30.985105   11756 out.go:177] * Verifying gcp-auth addon...
	I0930 10:21:30.987192   11756 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:21:31.075163   11756 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:21:31.160020   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:31.265321   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:31.471817   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:31.659260   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:31.765247   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:31.971255   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:32.173518   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:32.273120   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:32.471688   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:32.660429   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:32.681450   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:32.765804   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:32.971252   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:33.162033   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:33.265064   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:33.471821   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:33.660370   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:33.765521   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:33.972052   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:34.159398   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:34.265376   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:34.471627   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:34.659442   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:34.765346   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:34.972079   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:35.159616   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:35.180466   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:35.265120   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:35.471269   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:35.659702   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:35.766832   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:35.972908   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:36.160268   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:36.264957   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:36.471364   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:36.660353   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:36.765637   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:36.971546   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:37.160096   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:37.265141   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:37.471305   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:37.660262   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:37.681258   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:37.765314   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:37.972074   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:38.160196   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:38.264832   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:38.472157   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:38.659972   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:38.764925   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:38.971425   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:39.159791   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:39.265679   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:39.472080   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:39.659760   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:39.765620   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.053567   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:40.295107   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.295619   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:40.297050   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:40.471292   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:40.659771   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:40.766619   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.971614   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:41.159390   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:41.264729   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.472405   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:41.660298   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:41.765592   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.972403   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:42.160020   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.264711   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:42.472451   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:42.660424   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.680056   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:42.765124   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.024211   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:43.159336   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.265524   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.471873   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:43.662094   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.765621   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.971636   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.160737   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.264827   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.471409   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.660421   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.680630   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:44.765438   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.971864   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.159873   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.264169   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.471122   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.660682   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.765509   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.971946   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.160387   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.265577   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.472054   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.660157   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.681205   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:46.765253   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.971820   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.160742   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.264419   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.471562   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.660357   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.765913   11756 kapi.go:107] duration metric: took 19.504444181s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:21:47.972616   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.160238   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.472084   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.660875   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.972525   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.160495   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.179986   11756 pod_ready.go:103] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:49.578502   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.680065   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.971580   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.160474   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.180548   11756 pod_ready.go:93] pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.180572   11756 pod_ready.go:82] duration metric: took 19.50545169s for pod "coredns-7c65d6cfc9-vdjlp" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.180584   11756 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.184893   11756 pod_ready.go:93] pod "etcd-addons-485025" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.184916   11756 pod_ready.go:82] duration metric: took 4.322019ms for pod "etcd-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.184928   11756 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.189262   11756 pod_ready.go:93] pod "kube-apiserver-addons-485025" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.189284   11756 pod_ready.go:82] duration metric: took 4.347746ms for pod "kube-apiserver-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.189295   11756 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.193205   11756 pod_ready.go:93] pod "kube-controller-manager-addons-485025" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.193223   11756 pod_ready.go:82] duration metric: took 3.920446ms for pod "kube-controller-manager-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.193233   11756 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4dfl" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.197200   11756 pod_ready.go:93] pod "kube-proxy-r4dfl" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.197220   11756 pod_ready.go:82] duration metric: took 3.980012ms for pod "kube-proxy-r4dfl" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.197231   11756 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.471444   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.578162   11756 pod_ready.go:93] pod "kube-scheduler-addons-485025" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:50.578188   11756 pod_ready.go:82] duration metric: took 380.947977ms for pod "kube-scheduler-addons-485025" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:50.578199   11756 pod_ready.go:39] duration metric: took 32.414407889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:50.578224   11756 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:21:50.578287   11756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:50.594523   11756 api_server.go:72] duration metric: took 34.570935152s to wait for apiserver process to appear ...
	I0930 10:21:50.594558   11756 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:21:50.594583   11756 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:21:50.598673   11756 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:21:50.599623   11756 api_server.go:141] control plane version: v1.31.1
	I0930 10:21:50.599649   11756 api_server.go:131] duration metric: took 5.083695ms to wait for apiserver health ...
	I0930 10:21:50.599658   11756 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:21:50.659891   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.782645   11756 system_pods.go:59] 17 kube-system pods found
	I0930 10:21:50.782674   11756 system_pods.go:61] "coredns-7c65d6cfc9-vdjlp" [8972b887-927f-4352-9193-7055c500efb6] Running
	I0930 10:21:50.782684   11756 system_pods.go:61] "csi-hostpath-attacher-0" [310b2d52-6848-45f4-94f7-76b5950ff8c7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:50.782690   11756 system_pods.go:61] "csi-hostpath-resizer-0" [6f8e1b1a-be75-49ee-9d72-20cbc0eb9056] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:50.782702   11756 system_pods.go:61] "csi-hostpathplugin-sl6b6" [fb1fc1ba-92b9-4e1d-8888-4233bccc7032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:50.782708   11756 system_pods.go:61] "etcd-addons-485025" [a1e2dffc-229f-4f2a-9fd7-b616efee8e08] Running
	I0930 10:21:50.782714   11756 system_pods.go:61] "kube-apiserver-addons-485025" [817b4bf7-c1c1-4252-90c6-61c314af5c76] Running
	I0930 10:21:50.782722   11756 system_pods.go:61] "kube-controller-manager-addons-485025" [df8f60bc-4425-4584-95dd-2d4cd904bd83] Running
	I0930 10:21:50.782732   11756 system_pods.go:61] "kube-ingress-dns-minikube" [3f251512-e5ef-4716-9e29-0eb39032d93f] Running
	I0930 10:21:50.782737   11756 system_pods.go:61] "kube-proxy-r4dfl" [1f81e52d-d695-4f77-81b5-fb2fd1a5a7c6] Running
	I0930 10:21:50.782745   11756 system_pods.go:61] "kube-scheduler-addons-485025" [01d104cb-ac63-4542-ac3c-793c7246b186] Running
	I0930 10:21:50.782752   11756 system_pods.go:61] "metrics-server-84c5f94fbc-kvtbh" [213313da-61ef-4454-a082-5c64f6fad3d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:50.782757   11756 system_pods.go:61] "nvidia-device-plugin-daemonset-5zsrh" [1716d456-7b54-4982-b487-8bf11f302e7f] Running
	I0930 10:21:50.782764   11756 system_pods.go:61] "registry-66c9cd494c-9bg4w" [da79db35-9dbe-40b6-bc10-153757b8bf2a] Running
	I0930 10:21:50.782767   11756 system_pods.go:61] "registry-proxy-8lrkc" [0863352b-681f-45ef-a925-ee3ba3eb1198] Running
	I0930 10:21:50.782773   11756 system_pods.go:61] "snapshot-controller-56fcc65765-224qv" [a09a0400-f6a3-4cda-953a-7e5738bdf97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:50.782781   11756 system_pods.go:61] "snapshot-controller-56fcc65765-gqqd4" [5759f1d0-838c-4956-8358-dce84db71f65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:50.782787   11756 system_pods.go:61] "storage-provisioner" [1ac3769e-d2e9-40d1-8c38-c2494ba4e962] Running
	I0930 10:21:50.782795   11756 system_pods.go:74] duration metric: took 183.129605ms to wait for pod list to return data ...
	I0930 10:21:50.782806   11756 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:21:50.972220   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.977785   11756 default_sa.go:45] found service account: "default"
	I0930 10:21:50.977814   11756 default_sa.go:55] duration metric: took 194.998219ms for default service account to be created ...
	I0930 10:21:50.977824   11756 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:21:51.160181   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.256921   11756 system_pods.go:86] 17 kube-system pods found
	I0930 10:21:51.256959   11756 system_pods.go:89] "coredns-7c65d6cfc9-vdjlp" [8972b887-927f-4352-9193-7055c500efb6] Running
	I0930 10:21:51.256975   11756 system_pods.go:89] "csi-hostpath-attacher-0" [310b2d52-6848-45f4-94f7-76b5950ff8c7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:51.256986   11756 system_pods.go:89] "csi-hostpath-resizer-0" [6f8e1b1a-be75-49ee-9d72-20cbc0eb9056] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:51.256997   11756 system_pods.go:89] "csi-hostpathplugin-sl6b6" [fb1fc1ba-92b9-4e1d-8888-4233bccc7032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:51.257007   11756 system_pods.go:89] "etcd-addons-485025" [a1e2dffc-229f-4f2a-9fd7-b616efee8e08] Running
	I0930 10:21:51.257015   11756 system_pods.go:89] "kube-apiserver-addons-485025" [817b4bf7-c1c1-4252-90c6-61c314af5c76] Running
	I0930 10:21:51.257022   11756 system_pods.go:89] "kube-controller-manager-addons-485025" [df8f60bc-4425-4584-95dd-2d4cd904bd83] Running
	I0930 10:21:51.257032   11756 system_pods.go:89] "kube-ingress-dns-minikube" [3f251512-e5ef-4716-9e29-0eb39032d93f] Running
	I0930 10:21:51.257037   11756 system_pods.go:89] "kube-proxy-r4dfl" [1f81e52d-d695-4f77-81b5-fb2fd1a5a7c6] Running
	I0930 10:21:51.257042   11756 system_pods.go:89] "kube-scheduler-addons-485025" [01d104cb-ac63-4542-ac3c-793c7246b186] Running
	I0930 10:21:51.257056   11756 system_pods.go:89] "metrics-server-84c5f94fbc-kvtbh" [213313da-61ef-4454-a082-5c64f6fad3d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:51.257070   11756 system_pods.go:89] "nvidia-device-plugin-daemonset-5zsrh" [1716d456-7b54-4982-b487-8bf11f302e7f] Running
	I0930 10:21:51.257084   11756 system_pods.go:89] "registry-66c9cd494c-9bg4w" [da79db35-9dbe-40b6-bc10-153757b8bf2a] Running
	I0930 10:21:51.257089   11756 system_pods.go:89] "registry-proxy-8lrkc" [0863352b-681f-45ef-a925-ee3ba3eb1198] Running
	I0930 10:21:51.257103   11756 system_pods.go:89] "snapshot-controller-56fcc65765-224qv" [a09a0400-f6a3-4cda-953a-7e5738bdf97f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:51.257112   11756 system_pods.go:89] "snapshot-controller-56fcc65765-gqqd4" [5759f1d0-838c-4956-8358-dce84db71f65] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:51.257125   11756 system_pods.go:89] "storage-provisioner" [1ac3769e-d2e9-40d1-8c38-c2494ba4e962] Running
	I0930 10:21:51.257135   11756 system_pods.go:126] duration metric: took 279.304553ms to wait for k8s-apps to be running ...
	I0930 10:21:51.257149   11756 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:21:51.257198   11756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:21:51.349033   11756 system_svc.go:56] duration metric: took 91.866671ms WaitForService to wait for kubelet
	I0930 10:21:51.349069   11756 kubeadm.go:582] duration metric: took 35.325485832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:21:51.349092   11756 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:21:51.379561   11756 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0930 10:21:51.379587   11756 node_conditions.go:123] node cpu capacity is 8
	I0930 10:21:51.379599   11756 node_conditions.go:105] duration metric: took 30.502511ms to run NodePressure ...
	I0930 10:21:51.379610   11756 start.go:241] waiting for startup goroutines ...
	I0930 10:21:51.472446   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:51.659891   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.972297   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.159997   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.471462   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.660696   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.972077   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.159721   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.470863   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.658953   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.972179   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.159005   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.472208   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.660195   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.053668   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.159681   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.471836   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.660882   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.973638   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.160706   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.482830   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.684421   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.972421   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.160909   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.471399   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.660476   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.972270   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.159825   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.472034   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.659813   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.972635   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.160102   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.472243   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.660085   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.971959   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.160451   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.471556   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.660087   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.971444   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.253233   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.471994   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.660395   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.972550   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.160348   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.471020   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.673169   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.990065   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.159829   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.484014   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.686507   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.971609   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.160076   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.471715   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.673558   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.972673   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.160372   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.472200   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.660105   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.972157   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.160414   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.471887   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.660686   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.971767   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.160779   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.471232   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.660463   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.972152   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.160374   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.472312   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.659442   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.972385   11756 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:09.160547   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.472257   11756 kapi.go:107] duration metric: took 44.004768698s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:22:09.660300   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.193545   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.661983   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.159374   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.659991   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.167241   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.660924   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.160685   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.660007   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.160648   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.660098   11756 kapi.go:107] duration metric: took 45.504684757s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:22:54.490077   11756 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:22:54.490095   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:54.990471   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:55.490239   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:55.990448   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:56.490723   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:56.990789   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:57.490734   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:57.991056   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:58.491240   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:58.989879   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:59.490717   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:59.990755   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:00.490456   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:00.990631   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:01.490507   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:01.991194   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:02.490197   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:02.991280   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:03.490853   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:03.990785   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:04.490916   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:04.990874   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:05.490680   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:05.990426   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:06.490540   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:06.990331   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:07.490439   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:07.990223   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:08.490619   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:08.990437   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:09.490084   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:09.992284   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:10.490124   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:10.989670   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:11.490619   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:11.990846   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:12.490921   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:12.991319   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:13.490194   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:13.990034   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:14.490773   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:14.990931   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:15.490773   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:15.990539   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:16.490927   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:16.991088   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:17.489955   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:17.990840   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:18.491023   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:18.989849   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:19.490611   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:19.990609   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:20.490301   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:20.990361   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:21.490433   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:21.990401   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:22.490372   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:22.989989   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:23.489972   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:23.990880   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:24.491181   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:24.990051   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:25.489736   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:25.991011   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:26.490159   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:26.990126   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:27.490421   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:27.990248   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:28.490656   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:28.990547   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:29.490267   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:29.990371   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:30.490164   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:30.989901   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:31.490892   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:31.990731   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:32.490967   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:32.990571   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:33.491003   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:33.990803   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:34.489961   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:34.991332   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:35.489923   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:35.990729   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:36.490751   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:36.990331   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:37.490281   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:37.991055   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:38.490032   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:38.990534   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:39.490261   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:39.990431   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:40.490388   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:40.990498   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:41.490255   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:41.990322   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:42.490406   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:42.990256   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:43.490828   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:43.990669   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:44.490709   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:44.990307   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:45.489820   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:45.990644   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:46.490979   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:46.991353   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:47.490297   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:47.990178   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:48.490475   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:48.990342   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:49.490087   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:49.991445   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:50.490210   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:50.989999   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:51.490913   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:51.990971   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:52.491040   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:52.990064   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:53.490183   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:53.990639   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:54.490553   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:54.990483   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:55.490379   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:55.990378   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:56.490638   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:56.990897   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:57.491344   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:57.990288   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:58.490718   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:58.990559   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:59.490570   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:59.990699   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:00.490625   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:00.990816   11756 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:01.490756   11756 kapi.go:107] duration metric: took 2m30.503563452s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:24:01.492259   11756 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-485025 cluster.
	I0930 10:24:01.493370   11756 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:24:01.494567   11756 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:24:01.495736   11756 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0930 10:24:01.496938   11756 addons.go:510] duration metric: took 2m45.473326997s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0930 10:24:01.496973   11756 start.go:246] waiting for cluster config update ...
	I0930 10:24:01.496990   11756 start.go:255] writing updated cluster config ...
	I0930 10:24:01.497225   11756 ssh_runner.go:195] Run: rm -f paused
	I0930 10:24:01.543722   11756 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:24:01.545380   11756 out.go:177] * Done! kubectl is now configured to use "addons-485025" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.452378827Z" level=info msg="ignoring event" container=1b847ea644f063270b61d2b01de8a0822c42ea258b90b3e00508d11616be871c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.452439449Z" level=info msg="ignoring event" container=08b845c9bb5e782cca300e8cb53c56dba479de56f0881b2a12abfb3ab2be53ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.455424469Z" level=info msg="ignoring event" container=bbbc6abfed283ac036674e0ba0dabb86b03835b5b0cf7dd5b1eea9e4ca1e53c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.456738501Z" level=info msg="ignoring event" container=dbdcf17f6890e5f274d40cc8c1c6b3501f319b9d693ec825a9e0a6fefa0b9d6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.458998200Z" level=info msg="ignoring event" container=e77a76f5c0314c4afcd9bcf737532a2d8c45951531305d1e4a36967b547f60f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.469923687Z" level=info msg="ignoring event" container=9537e1420f35d694bf58609533cb232bcf785f15c591d961bdfbaae1f931572d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.471559377Z" level=info msg="ignoring event" container=437a82de85bd04a61d87ae273433b408faa56a088bba947c29046b7926c2f6e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.683089685Z" level=info msg="ignoring event" container=cc2bb30faf9d28752f9197fd587678bad506cd4502b52e778e4d98711ccef73b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.751674538Z" level=info msg="ignoring event" container=d3256261bb30beba08777f50b27c12450f0361712b3cc22c2972f27a11770d56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:32 addons-485025 dockerd[1345]: time="2024-09-30T10:33:32.784303491Z" level=info msg="ignoring event" container=7d5373a54a10c9790233c2aecbdb5b5cfcd4d8e7611dd4e4ff0a60bff5f37749 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:35 addons-485025 dockerd[1345]: time="2024-09-30T10:33:35.706425835Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a2b6b40366ec7988 traceID=221e631d16b91bb5c2236e1df0a780b3
	Sep 30 10:33:35 addons-485025 dockerd[1345]: time="2024-09-30T10:33:35.708172138Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a2b6b40366ec7988 traceID=221e631d16b91bb5c2236e1df0a780b3
	Sep 30 10:33:38 addons-485025 dockerd[1345]: time="2024-09-30T10:33:38.698458485Z" level=info msg="ignoring event" container=25a976feb551798a47539ed87a5163bfdf47ca96d484466c768a705e4123f634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:38 addons-485025 dockerd[1345]: time="2024-09-30T10:33:38.700800438Z" level=info msg="ignoring event" container=7e305b4df57f8fdedc7d7ce55ab88afc5755c2ee4863684228045acf5a54883a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:38 addons-485025 dockerd[1345]: time="2024-09-30T10:33:38.869967424Z" level=info msg="ignoring event" container=f179ca433309ac874fb839c555a04429bb5c9bea35e34f7c46fbf70eee8712b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:38 addons-485025 dockerd[1345]: time="2024-09-30T10:33:38.908388403Z" level=info msg="ignoring event" container=eb251aef0e0c878579c9f96034cc3720e52f37af516796d1ba2c945c59481fb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:43 addons-485025 dockerd[1345]: time="2024-09-30T10:33:43.777462129Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52 spanID=f4b1136ecb71efe3 traceID=773cbbac141462d5c43364849e6cb288
	Sep 30 10:33:43 addons-485025 dockerd[1345]: time="2024-09-30T10:33:43.799059936Z" level=info msg="ignoring event" container=10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:43 addons-485025 dockerd[1345]: time="2024-09-30T10:33:43.906321090Z" level=info msg="ignoring event" container=54e1b7f4f3479773db14b057d84bbeeb4e9a34019d16f6acb4c9156f38bb4239 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:53 addons-485025 dockerd[1345]: time="2024-09-30T10:33:53.225147415Z" level=info msg="ignoring event" container=0dcf2f146ba1673fb88da6a81e12bb5d25aa44a2607fc4dee7305ec5af936194 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:53 addons-485025 dockerd[1345]: time="2024-09-30T10:33:53.672664806Z" level=info msg="ignoring event" container=1a3722819018fefe7024fda3df77dbcf4e0eb732b32ea7395d99c42af465f404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:53 addons-485025 dockerd[1345]: time="2024-09-30T10:33:53.757185994Z" level=info msg="ignoring event" container=8d55d3c5ad40505cc0f45fcf2bd9701fab556cc9c36c84e605e99195bcbbaf16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:53 addons-485025 dockerd[1345]: time="2024-09-30T10:33:53.814491395Z" level=info msg="ignoring event" container=9b03e0046bf4ac46cb83479b2188636b7f1429175182042fa8901d99b5dc797a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:53 addons-485025 cri-dockerd[1610]: time="2024-09-30T10:33:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-8lrkc_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 30 10:33:53 addons-485025 dockerd[1345]: time="2024-09-30T10:33:53.898844622Z" level=info msg="ignoring event" container=155d1341ce754944df17d703d704ad7a180ae9c3e3b6eafc4ac3901b2caaff43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                             CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eaa809e8b9733       a416a98b71e22                                                                                                     41 seconds ago       Exited              helper-pod                0                   e2c12e07c9b01       helper-pod-delete-pvc-c9d28883-8cdc-411a-b481-ed6040da0be1
	f0f0dd4ecfe74       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                   44 seconds ago       Exited              busybox                   0                   6fe0d4de55d7b       test-local-path
	5bc7baea2148e       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                       57 seconds ago       Running             hello-world-app           0                   1b572184a0848       hello-world-app-55bf9c44b4-p6p72
	a9bef864587df       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                     About a minute ago   Running             nginx                     0                   0a188c9ac0bcc       nginx
	d9dbd70a3058a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb      9 minutes ago        Running             gcp-auth                  0                   2dc5a071e38fc       gcp-auth-89d5ffd79-q69gw
	8d55d3c5ad405       gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982   12 minutes ago       Exited              registry-proxy            0                   155d1341ce754       registry-proxy-8lrkc
	570be091a2bb9       6e38f40d628db                                                                                                     12 minutes ago       Running             storage-provisioner       0                   3cc41922d950e       storage-provisioner
	c670266622d75       c69fa2e9cbf5f                                                                                                     12 minutes ago       Running             coredns                   0                   efa3991f5c0e7       coredns-7c65d6cfc9-vdjlp
	0bb3495be0d08       60c005f310ff3                                                                                                     12 minutes ago       Running             kube-proxy                0                   c077f4cb6ff10       kube-proxy-r4dfl
	bf4659e7f16dc       175ffd71cce3d                                                                                                     12 minutes ago       Running             kube-controller-manager   0                   b48241e26c74b       kube-controller-manager-addons-485025
	23877de9b8f78       6bab7719df100                                                                                                     12 minutes ago       Running             kube-apiserver            0                   3f65ac73d15d4       kube-apiserver-addons-485025
	f8a8bdbbb7e99       9aa1fad941575                                                                                                     12 minutes ago       Running             kube-scheduler            0                   c470d7ad6be8a       kube-scheduler-addons-485025
	706fd73224678       2e96e5913fc06                                                                                                     12 minutes ago       Running             etcd                      0                   118577d9fb65b       etcd-addons-485025
	
	
	==> coredns [c670266622d7] <==
	[INFO] 10.244.0.21:49086 - 40113 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006958734s
	[INFO] 10.244.0.21:49086 - 40427 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004765025s
	[INFO] 10.244.0.21:46442 - 37361 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004837492s
	[INFO] 10.244.0.21:33444 - 16585 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004548336s
	[INFO] 10.244.0.21:56303 - 44893 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004596588s
	[INFO] 10.244.0.21:49292 - 12702 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003750609s
	[INFO] 10.244.0.21:59125 - 8484 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005443689s
	[INFO] 10.244.0.21:36417 - 29577 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004987177s
	[INFO] 10.244.0.21:57508 - 12942 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005424184s
	[INFO] 10.244.0.21:56303 - 64659 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004527402s
	[INFO] 10.244.0.21:57508 - 43774 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005032492s
	[INFO] 10.244.0.21:33444 - 52060 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005331649s
	[INFO] 10.244.0.21:59125 - 41716 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005309611s
	[INFO] 10.244.0.21:56303 - 5820 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076372s
	[INFO] 10.244.0.21:46442 - 22397 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00541669s
	[INFO] 10.244.0.21:49292 - 61737 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005322693s
	[INFO] 10.244.0.21:49086 - 53534 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005078074s
	[INFO] 10.244.0.21:59125 - 22127 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056091s
	[INFO] 10.244.0.21:36417 - 13024 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00577033s
	[INFO] 10.244.0.21:33444 - 53106 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008443s
	[INFO] 10.244.0.21:49292 - 14625 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079109s
	[INFO] 10.244.0.21:57508 - 21381 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000159451s
	[INFO] 10.244.0.21:49086 - 59668 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180918s
	[INFO] 10.244.0.21:46442 - 31578 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072124s
	[INFO] 10.244.0.21:36417 - 53996 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000176833s
	
	
	==> describe nodes <==
	Name:               addons-485025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-485025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-485025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_21_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-485025
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:21:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-485025
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:33:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:33:17 +0000   Mon, 30 Sep 2024 10:21:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:33:17 +0000   Mon, 30 Sep 2024 10:21:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:33:17 +0000   Mon, 30 Sep 2024 10:21:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:33:17 +0000   Mon, 30 Sep 2024 10:21:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-485025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 38ad125b66ea4ec2802e491bc8685941
	  System UUID:                c6ded2f0-b3bc-45ed-ab92-54568dd3b5e7
	  Boot ID:                    e8f00f6f-835b-4ab0-acbc-ac28d6990f2c
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-p6p72         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  gcp-auth                    gcp-auth-89d5ffd79-q69gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-vdjlp                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-485025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-485025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-485025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r4dfl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-485025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-485025 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-485025 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-485025 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-485025 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-485025 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-485025 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-485025 event: Registered Node addons-485025 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a bb 72 55 c4 09 08 06
	[Sep30 10:22] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a f9 0b 40 51 89 08 06
	[  +7.499447] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 94 5e c7 3f 4b 08 06
	[  +2.369420] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e c3 89 f6 d7 7a 08 06
	[  +0.040035] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 c4 e3 6e 26 f0 08 06
	[  +0.602015] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 00 55 ff 4b bc 08 06
	[ +19.949078] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e 36 8f 17 e5 e1 08 06
	[Sep30 10:23] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 21 be 13 6f ba 08 06
	[  +0.062942] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e b7 38 6e cf 74 08 06
	[Sep30 10:24] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a e5 7e d0 1c 02 08 06
	[  +0.000427] IPv4: martian source 10.244.0.25 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 8e fa 03 c7 39 08 06
	[Sep30 10:32] IPv4: martian source 10.244.0.29 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 94 5e c7 3f 4b 08 06
	[  +1.552765] IPv4: martian source 10.244.0.21 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 8e fa 03 c7 39 08 06
	
	
	==> etcd [706fd7322467] <==
	{"level":"info","ts":"2024-09-30T10:21:06.771318Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-485025 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:21:06.771322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:06.771485Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:06.771547Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:06.771581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:06.772613Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:06.772802Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:06.773747Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-30T10:21:06.773879Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:21:06.776388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:06.776479Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:06.776523Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-09-30T10:21:40.293080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.767918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:21:40.293162Z","caller":"traceutil/trace.go:171","msg":"trace[520933380] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:997; }","duration":"135.85455ms","start":"2024-09-30T10:21:40.157293Z","end":"2024-09-30T10:21:40.293147Z","steps":["trace[520933380] 'range keys from in-memory index tree'  (duration: 135.726151ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T10:21:40.293074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.613397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-vdjlp\" ","response":"range_response_count:1 size:5091"}
	{"level":"info","ts":"2024-09-30T10:21:40.293248Z","caller":"traceutil/trace.go:171","msg":"trace[1498556743] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-vdjlp; range_end:; response_count:1; response_revision:997; }","duration":"116.788216ms","start":"2024-09-30T10:21:40.176436Z","end":"2024-09-30T10:21:40.293224Z","steps":["trace[1498556743] 'range keys from in-memory index tree'  (duration: 116.545061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T10:21:43.020699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.509808ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:21:43.020754Z","caller":"traceutil/trace.go:171","msg":"trace[671667205] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1010; }","duration":"105.575946ms","start":"2024-09-30T10:21:42.915167Z","end":"2024-09-30T10:21:43.020743Z","steps":["trace[671667205] 'range keys from in-memory index tree'  (duration: 105.500742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T10:21:49.575951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.556625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:21:49.576020Z","caller":"traceutil/trace.go:171","msg":"trace[1820763279] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1049; }","duration":"106.634498ms","start":"2024-09-30T10:21:49.469372Z","end":"2024-09-30T10:21:49.576007Z","steps":["trace[1820763279] 'range keys from in-memory index tree'  (duration: 106.504838ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:21:59.850169Z","caller":"traceutil/trace.go:171","msg":"trace[968818960] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"151.592515ms","start":"2024-09-30T10:21:59.698555Z","end":"2024-09-30T10:21:59.850147Z","steps":["trace[968818960] 'process raft request'  (duration: 86.321396ms)","trace[968818960] 'compare'  (duration: 65.092095ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T10:22:09.905313Z","caller":"traceutil/trace.go:171","msg":"trace[794133770] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"143.849263ms","start":"2024-09-30T10:22:09.761444Z","end":"2024-09-30T10:22:09.905293Z","steps":["trace[794133770] 'process raft request'  (duration: 62.596136ms)","trace[794133770] 'compare'  (duration: 81.140452ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T10:31:06.886668Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1855}
	{"level":"info","ts":"2024-09-30T10:31:06.909860Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1855,"took":"22.700701ms","hash":1581934560,"current-db-size-bytes":8851456,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4718592,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-30T10:31:06.909900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1581934560,"revision":1855,"compact-revision":-1}
	
	
	==> gcp-auth [d9dbd70a3058] <==
	2024/09/30 10:24:40 Ready to write response ...
	2024/09/30 10:24:40 Ready to marshal response ...
	2024/09/30 10:24:40 Ready to write response ...
	2024/09/30 10:32:43 Ready to marshal response ...
	2024/09/30 10:32:43 Ready to write response ...
	2024/09/30 10:32:43 Ready to marshal response ...
	2024/09/30 10:32:43 Ready to write response ...
	2024/09/30 10:32:43 Ready to marshal response ...
	2024/09/30 10:32:43 Ready to write response ...
	2024/09/30 10:32:48 Ready to marshal response ...
	2024/09/30 10:32:48 Ready to write response ...
	2024/09/30 10:32:53 Ready to marshal response ...
	2024/09/30 10:32:53 Ready to write response ...
	2024/09/30 10:32:56 Ready to marshal response ...
	2024/09/30 10:32:56 Ready to write response ...
	2024/09/30 10:32:59 Ready to marshal response ...
	2024/09/30 10:32:59 Ready to write response ...
	2024/09/30 10:33:05 Ready to marshal response ...
	2024/09/30 10:33:05 Ready to write response ...
	2024/09/30 10:33:05 Ready to marshal response ...
	2024/09/30 10:33:05 Ready to write response ...
	2024/09/30 10:33:13 Ready to marshal response ...
	2024/09/30 10:33:13 Ready to write response ...
	2024/09/30 10:33:23 Ready to marshal response ...
	2024/09/30 10:33:23 Ready to write response ...
	
	
	==> kernel <==
	 10:33:54 up 16 min,  0 users,  load average: 1.61, 0.75, 0.46
	Linux addons-485025 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [23877de9b8f7] <==
	E0930 10:24:32.276233       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	W0930 10:24:32.666986       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0930 10:24:32.858580       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0930 10:32:43.583572       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.247.206"}
	I0930 10:32:48.753738       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0930 10:32:48.916265       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.159.233"}
	I0930 10:32:49.387575       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0930 10:32:50.399749       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0930 10:32:53.786659       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0930 10:32:56.380679       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.56.109"}
	I0930 10:33:07.951609       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0930 10:33:29.231893       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0930 10:33:38.588741       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:38.588796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:38.600709       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:38.600760       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:38.601540       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:38.601581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:38.614558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:38.614613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:38.653531       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:38.653573       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 10:33:39.602270       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 10:33:39.654595       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0930 10:33:39.755779       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [bf4659e7f16d] <==
	W0930 10:33:40.663358       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:40.663399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:41.232974       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:41.233011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:43.209306       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:43.209350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:43.211178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:43.211207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:44.209923       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:44.209960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:33:45.471307       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0930 10:33:45.471346       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:33:45.964465       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0930 10:33:45.964525       1 shared_informer.go:320] Caches are synced for garbage collector
	W0930 10:33:46.227538       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:46.227582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:47.297242       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:47.297289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:48.108151       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:48.108195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:48.489354       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:48.489390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:50.705327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:50.705373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:33:53.641436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.017µs"
	
	
	==> kube-proxy [0bb3495be0d0] <==
	I0930 10:21:17.962370       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:21:18.462662       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:21:18.462763       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:21:18.861527       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:21:18.861595       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:21:18.865613       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:21:18.865984       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:21:18.866006       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:21:18.954475       1 config.go:199] "Starting service config controller"
	I0930 10:21:18.954525       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:21:18.954477       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:21:18.954571       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:21:18.956071       1 config.go:328] "Starting node config controller"
	I0930 10:21:18.956083       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:21:19.055344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:21:19.055423       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:21:19.056282       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f8a8bdbbb7e9] <==
	E0930 10:21:08.352928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:08.352939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:08.353032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0930 10:21:08.353126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 10:21:08.353149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:08.353150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:08.353160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0930 10:21:08.353061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 10:21:08.353179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:08.353186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.180801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 10:21:09.180844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.245392       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:21:09.245431       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 10:21:09.250626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:09.250659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.272720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:09.272768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.301983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 10:21:09.302030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.398345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:21:09.398381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:09.405589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 10:21:09.405645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0930 10:21:12.475293       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.121672    2439 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd-kube-api-access-5kfzs" (OuterVolumeSpecName: "kube-api-access-5kfzs") pod "11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd" (UID: "11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd"). InnerVolumeSpecName "kube-api-access-5kfzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.220126    2439 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd-config-volume\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.220154    2439 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5kfzs\" (UniqueName: \"kubernetes.io/projected/11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd-kube-api-access-5kfzs\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.736739    2439 scope.go:117] "RemoveContainer" containerID="10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52"
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.750182    2439 scope.go:117] "RemoveContainer" containerID="10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52"
	Sep 30 10:33:44 addons-485025 kubelet[2439]: E0930 10:33:44.750825    2439 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52" containerID="10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52"
	Sep 30 10:33:44 addons-485025 kubelet[2439]: I0930 10:33:44.750858    2439 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52"} err="failed to get container status \"10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52\": rpc error: code = Unknown desc = Error response from daemon: No such container: 10a2f56d9b7c12ff7d5eed873bf0b40d45aad54fc1200d53444ace0ddfc77f52"
	Sep 30 10:33:46 addons-485025 kubelet[2439]: I0930 10:33:46.657171    2439 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd" path="/var/lib/kubelet/pods/11ae3e07-cdbe-4782-9ad2-bd52db2fb6dd/volumes"
	Sep 30 10:33:50 addons-485025 kubelet[2439]: E0930 10:33:50.651818    2439 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="bf96a69d-60dc-47c2-b018-cc9bd3efd4d6"
	Sep 30 10:33:50 addons-485025 kubelet[2439]: E0930 10:33:50.651879    2439 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fae7290f-6bd0-4d2a-ae59-c439a980c2fa"
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.375343    2439 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr7cf\" (UniqueName: \"kubernetes.io/projected/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-kube-api-access-nr7cf\") pod \"bf96a69d-60dc-47c2-b018-cc9bd3efd4d6\" (UID: \"bf96a69d-60dc-47c2-b018-cc9bd3efd4d6\") "
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.375401    2439 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-gcp-creds\") pod \"bf96a69d-60dc-47c2-b018-cc9bd3efd4d6\" (UID: \"bf96a69d-60dc-47c2-b018-cc9bd3efd4d6\") "
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.375477    2439 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bf96a69d-60dc-47c2-b018-cc9bd3efd4d6" (UID: "bf96a69d-60dc-47c2-b018-cc9bd3efd4d6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.377052    2439 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-kube-api-access-nr7cf" (OuterVolumeSpecName: "kube-api-access-nr7cf") pod "bf96a69d-60dc-47c2-b018-cc9bd3efd4d6" (UID: "bf96a69d-60dc-47c2-b018-cc9bd3efd4d6"). InnerVolumeSpecName "kube-api-access-nr7cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.476280    2439 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nr7cf\" (UniqueName: \"kubernetes.io/projected/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-kube-api-access-nr7cf\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.476317    2439 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6-gcp-creds\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.897339    2439 scope.go:117] "RemoveContainer" containerID="1a3722819018fefe7024fda3df77dbcf4e0eb732b32ea7395d99c42af465f404"
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.979269    2439 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rw65\" (UniqueName: \"kubernetes.io/projected/da79db35-9dbe-40b6-bc10-153757b8bf2a-kube-api-access-7rw65\") pod \"da79db35-9dbe-40b6-bc10-153757b8bf2a\" (UID: \"da79db35-9dbe-40b6-bc10-153757b8bf2a\") "
	Sep 30 10:33:53 addons-485025 kubelet[2439]: I0930 10:33:53.981161    2439 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da79db35-9dbe-40b6-bc10-153757b8bf2a-kube-api-access-7rw65" (OuterVolumeSpecName: "kube-api-access-7rw65") pod "da79db35-9dbe-40b6-bc10-153757b8bf2a" (UID: "da79db35-9dbe-40b6-bc10-153757b8bf2a"). InnerVolumeSpecName "kube-api-access-7rw65". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.079729    2439 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfd4n\" (UniqueName: \"kubernetes.io/projected/0863352b-681f-45ef-a925-ee3ba3eb1198-kube-api-access-bfd4n\") pod \"0863352b-681f-45ef-a925-ee3ba3eb1198\" (UID: \"0863352b-681f-45ef-a925-ee3ba3eb1198\") "
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.079806    2439 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7rw65\" (UniqueName: \"kubernetes.io/projected/da79db35-9dbe-40b6-bc10-153757b8bf2a-kube-api-access-7rw65\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.081511    2439 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0863352b-681f-45ef-a925-ee3ba3eb1198-kube-api-access-bfd4n" (OuterVolumeSpecName: "kube-api-access-bfd4n") pod "0863352b-681f-45ef-a925-ee3ba3eb1198" (UID: "0863352b-681f-45ef-a925-ee3ba3eb1198"). InnerVolumeSpecName "kube-api-access-bfd4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.179990    2439 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bfd4n\" (UniqueName: \"kubernetes.io/projected/0863352b-681f-45ef-a925-ee3ba3eb1198-kube-api-access-bfd4n\") on node \"addons-485025\" DevicePath \"\""
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.656977    2439 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf96a69d-60dc-47c2-b018-cc9bd3efd4d6" path="/var/lib/kubelet/pods/bf96a69d-60dc-47c2-b018-cc9bd3efd4d6/volumes"
	Sep 30 10:33:54 addons-485025 kubelet[2439]: I0930 10:33:54.657305    2439 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da79db35-9dbe-40b6-bc10-153757b8bf2a" path="/var/lib/kubelet/pods/da79db35-9dbe-40b6-bc10-153757b8bf2a/volumes"
	
	
	==> storage-provisioner [570be091a2bb] <==
	I0930 10:21:23.561599       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:21:23.649937       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:21:23.649986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:21:23.658729       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:21:23.658996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-485025_fec087af-ed5a-43de-b9e8-de88339482b5!
	I0930 10:21:23.659567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dda72a52-478f-4e63-98b9-deeb94aee097", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-485025_fec087af-ed5a-43de-b9e8-de88339482b5 became leader
	I0930 10:21:23.759464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-485025_fec087af-ed5a-43de-b9e8-de88339482b5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-485025 -n addons-485025
helpers_test.go:261: (dbg) Run:  kubectl --context addons-485025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-485025 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-485025 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-485025/192.168.49.2
	Start Time:       Mon, 30 Sep 2024 10:24:40 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmtkn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fmtkn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-485025
	  Normal   Pulling    7m43s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.32s)

                                                
                                    

Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.77
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.73
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.95
21 TestBinaryMirror 0.73
22 TestOffline 82.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.38
29 TestAddons/serial/Volcano 39.09
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 17.03
35 TestAddons/parallel/InspektorGadget 11.64
36 TestAddons/parallel/MetricsServer 5.52
38 TestAddons/parallel/CSI 44.34
39 TestAddons/parallel/Headlamp 15.23
40 TestAddons/parallel/CloudSpanner 5.4
41 TestAddons/parallel/LocalPath 50.66
42 TestAddons/parallel/NvidiaDevicePlugin 6.38
43 TestAddons/parallel/Yakd 10.58
44 TestAddons/StoppedEnableDisable 11.02
45 TestCertOptions 30.01
46 TestCertExpiration 237.18
47 TestDockerFlags 36.28
48 TestForceSystemdFlag 30.7
49 TestForceSystemdEnv 31.83
51 TestKVMDriverInstallOrUpdate 1.11
55 TestErrorSpam/setup 23.21
56 TestErrorSpam/start 0.52
57 TestErrorSpam/status 0.8
58 TestErrorSpam/pause 1.09
59 TestErrorSpam/unpause 1.2
60 TestErrorSpam/stop 10.78
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 59.53
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 27.92
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.16
72 TestFunctional/serial/CacheCmd/cache/add_local 0.65
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 38.57
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.91
83 TestFunctional/serial/LogsFileCmd 0.92
84 TestFunctional/serial/InvalidService 4.36
86 TestFunctional/parallel/ConfigCmd 0.36
87 TestFunctional/parallel/DashboardCmd 10.67
88 TestFunctional/parallel/DryRun 0.4
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.16
94 TestFunctional/parallel/ServiceCmdConnect 6.71
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 27.19
98 TestFunctional/parallel/SSHCmd 0.53
99 TestFunctional/parallel/CpCmd 1.87
100 TestFunctional/parallel/MySQL 22.93
101 TestFunctional/parallel/FileSync 0.23
102 TestFunctional/parallel/CertSync 1.89
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
110 TestFunctional/parallel/License 0.16
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
113 TestFunctional/parallel/ProfileCmd/profile_list 0.45
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.45
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.38
122 TestFunctional/parallel/ImageCommands/Setup 0.45
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
125 TestFunctional/parallel/DockerEnv/bash 1.06
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.96
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.24
139 TestFunctional/parallel/ServiceCmd/List 0.87
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.34
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
142 TestFunctional/parallel/ServiceCmd/Format 0.43
143 TestFunctional/parallel/ServiceCmd/URL 0.47
144 TestFunctional/parallel/MountCmd/any-port 14.6
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/MountCmd/specific-port 1.57
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 100.55
160 TestMultiControlPlane/serial/DeployApp 4.94
161 TestMultiControlPlane/serial/PingHostFromPods 1
162 TestMultiControlPlane/serial/AddWorkerNode 22.84
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
165 TestMultiControlPlane/serial/CopyFile 14.65
166 TestMultiControlPlane/serial/StopSecondaryNode 11.39
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
168 TestMultiControlPlane/serial/RestartSecondaryNode 111.39
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 169.95
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.09
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
173 TestMultiControlPlane/serial/StopCluster 32.24
174 TestMultiControlPlane/serial/RestartCluster 94.7
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 31.58
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
180 TestImageBuild/serial/Setup 23.16
181 TestImageBuild/serial/NormalBuild 1.31
182 TestImageBuild/serial/BuildWithBuildArg 0.74
183 TestImageBuild/serial/BuildWithDockerIgnore 0.52
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.54
188 TestJSONOutput/start/Command 64.91
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.47
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.39
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.87
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
213 TestKicCustomNetwork/create_custom_network 22.44
214 TestKicCustomNetwork/use_default_bridge_network 22.47
215 TestKicExistingNetwork 25.45
216 TestKicCustomSubnet 25.65
217 TestKicStaticIP 22.84
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 49.05
222 TestMountStart/serial/StartWithMountFirst 6.17
223 TestMountStart/serial/VerifyMountFirst 0.22
224 TestMountStart/serial/StartWithMountSecond 6.16
225 TestMountStart/serial/VerifyMountSecond 0.22
226 TestMountStart/serial/DeleteFirst 1.42
227 TestMountStart/serial/VerifyMountPostDelete 0.22
228 TestMountStart/serial/Stop 1.16
229 TestMountStart/serial/RestartStopped 7.58
230 TestMountStart/serial/VerifyMountPostStop 0.22
233 TestMultiNode/serial/FreshStart2Nodes 56.05
234 TestMultiNode/serial/DeployApp2Nodes 35.51
235 TestMultiNode/serial/PingHostFrom2Pods 0.68
236 TestMultiNode/serial/AddNode 17.21
237 TestMultiNode/serial/MultiNodeLabels 0.07
238 TestMultiNode/serial/ProfileList 0.58
239 TestMultiNode/serial/CopyFile 8.27
240 TestMultiNode/serial/StopNode 2.02
241 TestMultiNode/serial/StartAfterStop 9.49
242 TestMultiNode/serial/RestartKeepsNodes 104.57
243 TestMultiNode/serial/DeleteNode 5.08
244 TestMultiNode/serial/StopMultiNode 21.29
245 TestMultiNode/serial/RestartMultiNode 52.59
246 TestMultiNode/serial/ValidateNameConflict 23.05
251 TestPreload 84.99
253 TestScheduledStopUnix 94.12
254 TestSkaffold 97.78
256 TestInsufficientStorage 12.33
257 TestRunningBinaryUpgrade 74.92
259 TestKubernetesUpgrade 344.29
260 TestMissingContainerUpgrade 151.49
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
263 TestStoppedBinaryUpgrade/Setup 0.53
264 TestNoKubernetes/serial/StartWithK8s 30.15
265 TestStoppedBinaryUpgrade/Upgrade 115.42
266 TestNoKubernetes/serial/StartWithStopK8s 16.19
267 TestNoKubernetes/serial/Start 13.63
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
269 TestNoKubernetes/serial/ProfileList 1.61
270 TestNoKubernetes/serial/Stop 1.17
271 TestNoKubernetes/serial/StartNoArgs 6.72
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.48
293 TestPause/serial/Start 73.49
294 TestNetworkPlugins/group/auto/Start 34.72
295 TestNetworkPlugins/group/auto/KubeletFlags 0.23
296 TestNetworkPlugins/group/auto/NetCatPod 9.18
297 TestPause/serial/SecondStartNoReconfiguration 34.22
298 TestNetworkPlugins/group/auto/DNS 0.12
299 TestNetworkPlugins/group/auto/Localhost 0.11
300 TestNetworkPlugins/group/auto/HairPin 0.11
301 TestNetworkPlugins/group/kindnet/Start 35.99
302 TestPause/serial/Pause 0.5
303 TestPause/serial/VerifyStatus 0.27
304 TestPause/serial/Unpause 0.43
305 TestPause/serial/PauseAgain 0.6
306 TestPause/serial/DeletePaused 2.14
307 TestPause/serial/VerifyDeletedResources 16.35
308 TestNetworkPlugins/group/calico/Start 32.98
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
311 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
312 TestNetworkPlugins/group/kindnet/DNS 24.95
313 TestNetworkPlugins/group/calico/ControllerPod 20.01
314 TestNetworkPlugins/group/custom-flannel/Start 46.21
315 TestNetworkPlugins/group/kindnet/Localhost 0.12
316 TestNetworkPlugins/group/kindnet/HairPin 0.11
317 TestNetworkPlugins/group/calico/KubeletFlags 0.28
318 TestNetworkPlugins/group/calico/NetCatPod 10.69
319 TestNetworkPlugins/group/calico/DNS 0.14
320 TestNetworkPlugins/group/calico/Localhost 0.14
321 TestNetworkPlugins/group/calico/HairPin 0.12
322 TestNetworkPlugins/group/false/Start 40.75
323 TestNetworkPlugins/group/enable-default-cni/Start 69.43
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
326 TestNetworkPlugins/group/custom-flannel/DNS 0.13
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
329 TestNetworkPlugins/group/false/KubeletFlags 0.27
330 TestNetworkPlugins/group/false/NetCatPod 10.21
331 TestNetworkPlugins/group/false/DNS 0.16
332 TestNetworkPlugins/group/false/Localhost 0.15
333 TestNetworkPlugins/group/false/HairPin 0.14
334 TestNetworkPlugins/group/flannel/Start 47.79
335 TestNetworkPlugins/group/kubenet/Start 64.87
336 TestNetworkPlugins/group/bridge/Start 42.94
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
344 TestNetworkPlugins/group/flannel/NetCatPod 9.22
346 TestStartStop/group/old-k8s-version/serial/FirstStart 153.82
347 TestNetworkPlugins/group/flannel/DNS 0.14
348 TestNetworkPlugins/group/flannel/Localhost 0.12
349 TestNetworkPlugins/group/flannel/HairPin 0.12
350 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
351 TestNetworkPlugins/group/bridge/NetCatPod 10.17
352 TestNetworkPlugins/group/bridge/DNS 0.16
353 TestNetworkPlugins/group/bridge/Localhost 0.16
354 TestNetworkPlugins/group/bridge/HairPin 0.22
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
356 TestNetworkPlugins/group/kubenet/NetCatPod 8.19
358 TestStartStop/group/no-preload/serial/FirstStart 69.98
359 TestNetworkPlugins/group/kubenet/DNS 0.19
360 TestNetworkPlugins/group/kubenet/Localhost 0.14
361 TestNetworkPlugins/group/kubenet/HairPin 0.15
363 TestStartStop/group/embed-certs/serial/FirstStart 65.53
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.8
366 TestStartStop/group/no-preload/serial/DeployApp 9.23
367 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
368 TestStartStop/group/no-preload/serial/Stop 10.78
369 TestStartStop/group/embed-certs/serial/DeployApp 8.25
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
371 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
372 TestStartStop/group/no-preload/serial/SecondStart 263.12
373 TestStartStop/group/embed-certs/serial/Stop 10.83
374 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
376 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.73
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
378 TestStartStop/group/embed-certs/serial/SecondStart 263.44
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.85
381 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
382 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
383 TestStartStop/group/old-k8s-version/serial/Stop 10.86
384 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
385 TestStartStop/group/old-k8s-version/serial/SecondStart 23.66
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 28.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
389 TestStartStop/group/old-k8s-version/serial/Pause 2.23
391 TestStartStop/group/newest-cni/serial/FirstStart 27.08
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
394 TestStartStop/group/newest-cni/serial/Stop 9.96
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
396 TestStartStop/group/newest-cni/serial/SecondStart 14.13
397 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
400 TestStartStop/group/newest-cni/serial/Pause 2.38
401 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
402 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
403 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
404 TestStartStop/group/no-preload/serial/Pause 2.25
405 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
406 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
408 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
409 TestStartStop/group/embed-certs/serial/Pause 2.27
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.22
x
+
TestDownloadOnly/v1.20.0/json-events (4.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-912153 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-912153 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.769891565s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 10:20:26.740180   10447 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0930 10:20:26.740265   10447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-912153
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-912153: exit status 85 (54.457354ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-912153 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |          |
	|         | -p download-only-912153        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:22.005336   10459 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:22.005583   10459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:22.005591   10459 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:22.005595   10459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:22.005773   10459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	W0930 10:20:22.005886   10459 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-3685/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-3685/.minikube/config/config.json: no such file or directory
	I0930 10:20:22.006430   10459 out.go:352] Setting JSON to true
	I0930 10:20:22.007285   10459 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":170,"bootTime":1727691452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:22.007351   10459 start.go:139] virtualization: kvm guest
	I0930 10:20:22.009669   10459 out.go:97] [download-only-912153] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:20:22.009783   10459 notify.go:220] Checking for updates...
	W0930 10:20:22.009779   10459 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:20:22.011088   10459 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:22.012299   10459 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:22.013555   10459 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:20:22.014618   10459 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	I0930 10:20:22.015624   10459 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0930 10:20:22.017461   10459 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:20:22.017643   10459 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:22.039761   10459 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:22.039841   10459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:22.488959   10459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-30 10:20:22.480525404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:22.489064   10459 docker.go:318] overlay module found
	I0930 10:20:22.490507   10459 out.go:97] Using the docker driver based on user configuration
	I0930 10:20:22.490531   10459 start.go:297] selected driver: docker
	I0930 10:20:22.490537   10459 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:22.490614   10459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:22.534595   10459 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-30 10:20:22.52678442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:22.534753   10459 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:22.535229   10459 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0930 10:20:22.535386   10459 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:20:22.537174   10459 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-912153 host does not exist
	  To start a cluster, run: "minikube start -p download-only-912153"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-912153
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (3.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-816940 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-816940 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.732670548s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 10:20:30.840857   10447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:20:30.840901   10447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3685/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-816940
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-816940: exit status 85 (55.771283ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-912153 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-912153        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-912153        | download-only-912153 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-816940 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-816940        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:27
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:27.146581   10808 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:27.146854   10808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:27.146864   10808 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:27.146868   10808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:27.147063   10808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:20:27.147631   10808 out.go:352] Setting JSON to true
	I0930 10:20:27.148515   10808 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":175,"bootTime":1727691452,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:27.148606   10808 start.go:139] virtualization: kvm guest
	I0930 10:20:27.150682   10808 out.go:97] [download-only-816940] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:20:27.150843   10808 notify.go:220] Checking for updates...
	I0930 10:20:27.152205   10808 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:27.153452   10808 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:27.154745   10808 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:20:27.156264   10808 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	I0930 10:20:27.157452   10808 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0930 10:20:27.159782   10808 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:20:27.160043   10808 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:27.180480   10808 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:27.180544   10808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:27.225024   10808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-30 10:20:27.216067121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:27.225145   10808 docker.go:318] overlay module found
	I0930 10:20:27.226910   10808 out.go:97] Using the docker driver based on user configuration
	I0930 10:20:27.226930   10808 start.go:297] selected driver: docker
	I0930 10:20:27.226938   10808 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:27.227006   10808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:27.272413   10808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-30 10:20:27.26382728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:20:27.272599   10808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:27.273079   10808 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0930 10:20:27.273203   10808 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:20:27.274935   10808 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-816940 host does not exist
	  To start a cluster, run: "minikube start -p download-only-816940"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-816940
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.95s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-079911 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-079911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-079911
--- PASS: TestDownloadOnlyKic (0.95s)

                                                
                                    
x
+
TestBinaryMirror (0.73s)

                                                
                                                
=== RUN   TestBinaryMirror
I0930 10:20:32.399425   10447 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-919884 --alsologtostderr --binary-mirror http://127.0.0.1:33823 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-919884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-919884
--- PASS: TestBinaryMirror (0.73s)

                                                
                                    
x
+
TestOffline (82.91s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-778899 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-778899 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m20.593280767s)
helpers_test.go:175: Cleaning up "offline-docker-778899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-778899
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-778899: (2.319815095s)
--- PASS: TestOffline (82.91s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-485025
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-485025: exit status 85 (51.862083ms)

                                                
                                                
-- stdout --
	* Profile "addons-485025" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-485025"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-485025
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-485025: exit status 85 (50.659547ms)

                                                
                                                
-- stdout --
	* Profile "addons-485025" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-485025"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (208.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-485025 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-485025 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m28.375009595s)
--- PASS: TestAddons/Setup (208.38s)

                                                
                                    
x
+
TestAddons/serial/Volcano (39.09s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 9.739285ms
addons_test.go:843: volcano-admission stabilized in 9.774525ms
addons_test.go:835: volcano-scheduler stabilized in 10.006077ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-x8c59" [ecf83aa6-75c2-4e87-89a6-8cb736a62656] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00305619s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-sps42" [b5619955-3f77-4f39-bc7c-8c56b79863b8] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003634744s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-gsxvk" [f3611e4d-2012-424a-9f2d-e296d4937f24] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003317453s
addons_test.go:870: (dbg) Run:  kubectl --context addons-485025 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-485025 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-485025 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [450cab0f-5af1-4f9e-9ae8-129e06eed08e] Pending
helpers_test.go:344: "test-job-nginx-0" [450cab0f-5af1-4f9e-9ae8-129e06eed08e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [450cab0f-5af1-4f9e-9ae8-129e06eed08e] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004134249s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable volcano --alsologtostderr -v=1: (10.767030944s)
--- PASS: TestAddons/serial/Volcano (39.09s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-485025 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-485025 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (17.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-485025 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-485025 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-485025 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [49528a79-4cbf-4e0f-814e-8de8242db115] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [49528a79-4cbf-4e0f-814e-8de8242db115] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003637197s
I0930 10:32:55.927243   10447 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-485025 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable ingress-dns --alsologtostderr -v=1: (1.38912067s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable ingress --alsologtostderr -v=1: (7.547895392s)
--- PASS: TestAddons/parallel/Ingress (17.03s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.64s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f7x2t" [145e3ce4-b4ae-4d46-a371-5c9dda52d6d8] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003232456s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-485025
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-485025: (5.635632039s)
--- PASS: TestAddons/parallel/InspektorGadget (11.64s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.169542ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kvtbh" [213313da-61ef-4454-a082-5c64f6fad3d1] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002872464s
addons_test.go:413: (dbg) Run:  kubectl --context addons-485025 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.52s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0930 10:32:54.623377   10447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:32:54.627272   10447 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:32:54.627293   10447 kapi.go:107] duration metric: took 3.930848ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.938473ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-485025 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-485025 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [85b8415b-db1d-4826-9b4d-c3aca9035b93] Pending
helpers_test.go:344: "task-pv-pod" [85b8415b-db1d-4826-9b4d-c3aca9035b93] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [85b8415b-db1d-4826-9b4d-c3aca9035b93] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003611302s
addons_test.go:528: (dbg) Run:  kubectl --context addons-485025 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-485025 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-485025 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-485025 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-485025 delete pod task-pv-pod: (1.145259228s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-485025 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-485025 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-485025 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [95436ecf-32a1-4e86-aeb9-290c08842caf] Pending
helpers_test.go:344: "task-pv-pod-restore" [95436ecf-32a1-4e86-aeb9-290c08842caf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [95436ecf-32a1-4e86-aeb9-290c08842caf] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00386444s
addons_test.go:570: (dbg) Run:  kubectl --context addons-485025 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-485025 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-485025 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.454086632s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.34s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-485025 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-dmngf" [9e92ee0f-1b52-4828-ac80-fc604205494f] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-dmngf" [9e92ee0f-1b52-4828-ac80-fc604205494f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-dmngf" [9e92ee0f-1b52-4828-ac80-fc604205494f] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003541354s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable headlamp --alsologtostderr -v=1: (5.587910937s)
--- PASS: TestAddons/parallel/Headlamp (15.23s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-m96sj" [3af09859-cc19-4a95-a724-6e017b0c3e92] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002811761s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-485025
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (50.66s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-485025 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-485025 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [da8a0ccc-b9e5-46f7-8581-06ccc22f4624] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [da8a0ccc-b9e5-46f7-8581-06ccc22f4624] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [da8a0ccc-b9e5-46f7-8581-06ccc22f4624] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003502972s
addons_test.go:938: (dbg) Run:  kubectl --context addons-485025 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 ssh "cat /opt/local-path-provisioner/pvc-c9d28883-8cdc-411a-b481-ed6040da0be1_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-485025 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-485025 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.886831344s)
--- PASS: TestAddons/parallel/LocalPath (50.66s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5zsrh" [1716d456-7b54-4982-b487-8bf11f302e7f] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002720353s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-485025
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4ctsb" [ce377b79-415c-43d2-9693-d75aa137625d] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00335082s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-485025 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-485025 addons disable yakd --alsologtostderr -v=1: (5.572519345s)
--- PASS: TestAddons/parallel/Yakd (10.58s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.02s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-485025
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-485025: (10.794620191s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-485025
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-485025
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-485025
--- PASS: TestAddons/StoppedEnableDisable (11.02s)

                                                
                                    
x
+
TestCertOptions (30.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-051066 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-051066 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.268960651s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-051066 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-051066 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-051066 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-051066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-051066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-051066: (2.157964483s)
--- PASS: TestCertOptions (30.01s)

                                                
                                    
x
+
TestCertExpiration (237.18s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-833648 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-833648 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.18920863s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-833648 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-833648 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.775461357s)
helpers_test.go:175: Cleaning up "cert-expiration-833648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-833648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-833648: (2.213902591s)
--- PASS: TestCertExpiration (237.18s)

                                                
                                    
x
+
TestDockerFlags (36.28s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-895683 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-895683 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.291688446s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-895683 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-895683 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-895683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-895683
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-895683: (2.36180908s)
--- PASS: TestDockerFlags (36.28s)

                                                
                                    
x
+
TestForceSystemdFlag (30.7s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-408551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-408551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.244136025s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-408551 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-408551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-408551
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-408551: (5.134882336s)
--- PASS: TestForceSystemdFlag (30.70s)

                                                
                                    
x
+
TestForceSystemdEnv (31.83s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-085704 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-085704 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.386988558s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-085704 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-085704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-085704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-085704: (2.078802039s)
--- PASS: TestForceSystemdEnv (31.83s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.11s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0930 11:04:44.166559   10447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 11:04:44.166739   10447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0930 11:04:44.200220   10447 install.go:62] docker-machine-driver-kvm2: exit status 1
W0930 11:04:44.200511   10447 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 11:04:44.200576   10447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate444166187/001/docker-machine-driver-kvm2
I0930 11:04:44.327922   10447 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate444166187/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc000789a00 gz:0xc000789a08 tar:0xc0007899b0 tar.bz2:0xc0007899c0 tar.gz:0xc0007899d0 tar.xz:0xc0007899e0 tar.zst:0xc0007899f0 tbz2:0xc0007899c0 tgz:0xc0007899d0 txz:0xc0007899e0 tzst:0xc0007899f0 xz:0xc000789a20 zip:0xc000789a30 zst:0xc000789a28] Getters:map[file:0xc001b98ec0 http:0xc000523770 https:0xc0005237c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0930 11:04:44.327963   10447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate444166187/001/docker-machine-driver-kvm2
I0930 11:04:44.839849   10447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 11:04:44.839935   10447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0930 11:04:44.865718   10447 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0930 11:04:44.865752   10447 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0930 11:04:44.865828   10447 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 11:04:44.865861   10447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate444166187/002/docker-machine-driver-kvm2
I0930 11:04:44.888690   10447 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate444166187/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc000789a00 gz:0xc000789a08 tar:0xc0007899b0 tar.bz2:0xc0007899c0 tar.gz:0xc0007899d0 tar.xz:0xc0007899e0 tar.zst:0xc0007899f0 tbz2:0xc0007899c0 tgz:0xc0007899d0 txz:0xc0007899e0 tzst:0xc0007899f0 xz:0xc000789a20 zip:0xc000789a30 zst:0xc000789a28] Getters:map[file:0xc015afe4a0 http:0xc000817d10 https:0xc000817d60] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response cod
e: 404. trying to get the common version
I0930 11:04:44.888728   10447 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate444166187/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.11s)

                                                
                                    
x
+
TestErrorSpam/setup (23.21s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-533158 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-533158 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-533158 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-533158 --driver=docker  --container-runtime=docker: (23.214224973s)
--- PASS: TestErrorSpam/setup (23.21s)

                                                
                                    
x
+
TestErrorSpam/start (0.52s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 start --dry-run
--- PASS: TestErrorSpam/start (0.52s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.09s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 pause
--- PASS: TestErrorSpam/pause (1.09s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 unpause
--- PASS: TestErrorSpam/unpause (1.20s)

                                                
                                    
x
+
TestErrorSpam/stop (10.78s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 stop: (10.616776776s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-533158 --log_dir /tmp/nospam-533158 stop
--- PASS: TestErrorSpam/stop (10.78s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-3685/.minikube/files/etc/test/nested/copy/10447/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (59.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-479649 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (59.529571112s)
--- PASS: TestFunctional/serial/StartWithProxy (59.53s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.92s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0930 10:35:48.454034   10447 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-479649 --alsologtostderr -v=8: (27.917126867s)
functional_test.go:663: soft start took 27.919743871s for "functional-479649" cluster.
I0930 10:36:16.371577   10447 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (27.92s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-479649 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.16s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-479649 /tmp/TestFunctionalserialCacheCmdcacheadd_local172820057/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache add minikube-local-cache-test:functional-479649
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache delete minikube-local-cache-test:functional-479649
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-479649
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (242.732376ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 kubectl -- --context functional-479649 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-479649 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.57s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-479649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.569855966s)
functional_test.go:761: restart took 38.569994075s for "functional-479649" cluster.
I0930 10:36:59.644230   10447 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.57s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-479649 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 logs
--- PASS: TestFunctional/serial/LogsCmd (0.91s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 logs --file /tmp/TestFunctionalserialLogsFileCmd3600072030/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.36s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-479649 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-479649
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-479649: exit status 115 (299.523818ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31168 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-479649 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 config get cpus: exit status 14 (86.007799ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 config get cpus: exit status 14 (45.91573ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-479649 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-479649 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 65520: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.67s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-479649 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.686317ms)

                                                
                                                
-- stdout --
	* [functional-479649] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:37:06.501208   63949 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:37:06.501323   63949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:06.501331   63949 out.go:358] Setting ErrFile to fd 2...
	I0930 10:37:06.501338   63949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:06.501629   63949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:37:06.502363   63949 out.go:352] Setting JSON to false
	I0930 10:37:06.503704   63949 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1174,"bootTime":1727691452,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:37:06.503812   63949 start.go:139] virtualization: kvm guest
	I0930 10:37:06.505427   63949 out.go:177] * [functional-479649] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:37:06.506873   63949 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:37:06.506883   63949 notify.go:220] Checking for updates...
	I0930 10:37:06.509163   63949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:37:06.510483   63949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:37:06.511703   63949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	I0930 10:37:06.512884   63949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:37:06.514530   63949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:37:06.516177   63949 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:37:06.517017   63949 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:37:06.545385   63949 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:37:06.545493   63949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:37:06.613136   63949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-30 10:37:06.60093266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:37:06.613256   63949 docker.go:318] overlay module found
	I0930 10:37:06.615044   63949 out.go:177] * Using the docker driver based on existing profile
	I0930 10:37:06.616136   63949 start.go:297] selected driver: docker
	I0930 10:37:06.616150   63949 start.go:901] validating driver "docker" against &{Name:functional-479649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-479649 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:37:06.616227   63949 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:37:06.617917   63949 out.go:201] 
	W0930 10:37:06.619093   63949 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 10:37:06.620374   63949 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-479649 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-479649 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (186.450493ms)

                                                
                                                
-- stdout --
	* [functional-479649] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:37:06.328759   63733 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:37:06.328899   63733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:06.328910   63733 out.go:358] Setting ErrFile to fd 2...
	I0930 10:37:06.328917   63733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:37:06.329200   63733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:37:06.330019   63733 out.go:352] Setting JSON to false
	I0930 10:37:06.331249   63733 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1174,"bootTime":1727691452,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:37:06.331372   63733 start.go:139] virtualization: kvm guest
	I0930 10:37:06.333147   63733 out.go:177] * [functional-479649] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0930 10:37:06.334566   63733 notify.go:220] Checking for updates...
	I0930 10:37:06.334576   63733 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:37:06.335731   63733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:37:06.337042   63733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	I0930 10:37:06.338319   63733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	I0930 10:37:06.339613   63733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:37:06.340954   63733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:37:06.342860   63733 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:37:06.343383   63733 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:37:06.369341   63733 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:37:06.369445   63733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:37:06.440552   63733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-30 10:37:06.4281724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:37:06.440682   63733 docker.go:318] overlay module found
	I0930 10:37:06.442519   63733 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0930 10:37:06.443660   63733 start.go:297] selected driver: docker
	I0930 10:37:06.443675   63733 start.go:901] validating driver "docker" against &{Name:functional-479649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-479649 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:37:06.443771   63733 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:37:06.445676   63733 out.go:201] 
	W0930 10:37:06.446700   63733 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 10:37:06.447847   63733 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-479649 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-479649 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p2l66" [dde8c898-09e8-4b58-a572-476a646ae2c5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p2l66" [dde8c898-09e8-4b58-a572-476a646ae2c5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004426009s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32008
functional_test.go:1675: http://192.168.49.2:32008: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-p2l66

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32008
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.71s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ccc3fbfa-6beb-4494-b6e7-e3c8bba7c7c3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004154157s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-479649 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-479649 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-479649 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-479649 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb01c00f-2ece-4b84-a6bc-4b40b52f1010] Pending
helpers_test.go:344: "sp-pod" [bb01c00f-2ece-4b84-a6bc-4b40b52f1010] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb01c00f-2ece-4b84-a6bc-4b40b52f1010] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00329331s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-479649 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-479649 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-479649 delete -f testdata/storage-provisioner/pod.yaml: (1.407286526s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-479649 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ebfec68e-ec5b-428c-a518-623ba5274189] Pending
helpers_test.go:344: "sp-pod" [ebfec68e-ec5b-428c-a518-623ba5274189] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ebfec68e-ec5b-428c-a518-623ba5274189] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004959076s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-479649 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.19s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh -n functional-479649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cp functional-479649:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1023977229/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh -n functional-479649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh -n functional-479649 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-479649 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8ctmz" [0244ea2e-1db3-4465-99c3-8ceb3c486941] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8ctmz" [0244ea2e-1db3-4465-99c3-8ceb3c486941] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.00425293s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-479649 exec mysql-6cdb49bbb-8ctmz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-479649 exec mysql-6cdb49bbb-8ctmz -- mysql -ppassword -e "show databases;": exit status 1 (208.736681ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 10:37:37.171210   10447 retry.go:31] will retry after 1.409330933s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-479649 exec mysql-6cdb49bbb-8ctmz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-479649 exec mysql-6cdb49bbb-8ctmz -- mysql -ppassword -e "show databases;": exit status 1 (109.377956ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 10:37:38.691178   10447 retry.go:31] will retry after 1.877553331s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-479649 exec mysql-6cdb49bbb-8ctmz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.93s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10447/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /etc/test/nested/copy/10447/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10447.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /etc/ssl/certs/10447.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10447.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /usr/share/ca-certificates/10447.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/104472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /etc/ssl/certs/104472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/104472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /usr/share/ca-certificates/104472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-479649 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh "sudo systemctl is-active crio": exit status 1 (337.607079ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-479649 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-479649 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-hm6r9" [9773ad03-0141-4d63-8461-4e3b6facdbfc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-hm6r9" [9773ad03-0141-4d63-8461-4e3b6facdbfc] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003808079s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "398.557773ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.431666ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "439.080701ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.73024ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
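
The timings above (~439ms for the full listing vs ~44ms with --light) illustrate the two listing modes; a minimal sketch, assuming --light behaves as it appears to in this run by skipping per-cluster status checks:
    minikube profile list -o json           # validates each cluster's status; slower
    minikube profile list -o json --light   # skips status validation; returns faster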

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479649 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-479649
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-479649
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479649 image ls --format short --alsologtostderr:
I0930 10:37:31.940036   70945 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:31.940155   70945 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:31.940163   70945 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:31.940167   70945 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:31.940345   70945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
I0930 10:37:31.940891   70945 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:31.940983   70945 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:31.941328   70945 cli_runner.go:164] Run: docker container inspect functional-479649 --format={{.State.Status}}
I0930 10:37:31.959076   70945 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:31.959142   70945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479649
I0930 10:37:31.980123   70945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/functional-479649/id_rsa Username:docker}
I0930 10:37:32.096737   70945 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
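
The same listing is exercised in table, json, and yaml form in the tests that follow; a minimal sketch of the four output modes used in this section:
    minikube -p functional-479649 image ls --format short   # repo:tag lines only (as above)
    minikube -p functional-479649 image ls --format table   # image/tag/ID/size table
    minikube -p functional-479649 image ls --format json
    minikube -p functional-479649 image ls --format yaml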

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479649 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | latest            | 9527c0f683c3b | 188MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-479649 | 8d15ea3085db1 | 30B    |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-479649 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479649 image ls --format table --alsologtostderr:
I0930 10:37:32.423859   71129 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:32.423985   71129 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.423995   71129 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:32.424001   71129 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.424199   71129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
I0930 10:37:32.424850   71129 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.424961   71129 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.425354   71129 cli_runner.go:164] Run: docker container inspect functional-479649 --format={{.State.Status}}
I0930 10:37:32.441246   71129 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:32.441298   71129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479649
I0930 10:37:32.458859   71129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/functional-479649/id_rsa Username:docker}
I0930 10:37:32.548992   71129 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479649 image ls --format json --alsologtostderr:
[{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1a
aea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"8d15ea3085db1b937cfa578a53c133172698e38cedcdcace63b4e8635320e9a8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-479649"],"size":"30"},{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302
cd","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-479649"],"size":"
4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479649 image ls --format json --alsologtostderr:
I0930 10:37:32.209527   70998 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:32.209659   70998 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.209671   70998 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:32.209677   70998 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.209890   70998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
I0930 10:37:32.210505   70998 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.210614   70998 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.210988   70998 cli_runner.go:164] Run: docker container inspect functional-479649 --format={{.State.Status}}
I0930 10:37:32.227977   70998 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:32.228017   70998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479649
I0930 10:37:32.243847   70998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/functional-479649/id_rsa Username:docker}
I0930 10:37:32.349040   70998 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-479649 image ls --format yaml --alsologtostderr:
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 8d15ea3085db1b937cfa578a53c133172698e38cedcdcace63b4e8635320e9a8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-479649
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 982741355094fc47a04a3380690240fa8d5f70b2e95832abb39708eee34cad29
repoDigests: []
repoTags:
- localhost/my-image:functional-479649
size: "1240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-479649
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479649 image ls --format yaml --alsologtostderr:
I0930 10:37:35.006659   71922 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:35.006760   71922 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:35.006766   71922 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:35.006770   71922 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:35.006969   71922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
I0930 10:37:35.007524   71922 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:35.007656   71922 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:35.008055   71922 cli_runner.go:164] Run: docker container inspect functional-479649 --format={{.State.Status}}
I0930 10:37:35.027609   71922 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:35.027654   71922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479649
I0930 10:37:35.044483   71922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/functional-479649/id_rsa Username:docker}
I0930 10:37:35.132349   71922 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh pgrep buildkitd: exit status 1 (226.614328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image build -t localhost/my-image:functional-479649 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-479649 image build -t localhost/my-image:functional-479649 testdata/build --alsologtostderr: (1.940654656s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-479649 image build -t localhost/my-image:functional-479649 testdata/build --alsologtostderr:
I0930 10:37:32.851302   71317 out.go:345] Setting OutFile to fd 1 ...
I0930 10:37:32.851430   71317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.851438   71317 out.go:358] Setting ErrFile to fd 2...
I0930 10:37:32.851443   71317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:37:32.851649   71317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
I0930 10:37:32.852287   71317 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.852964   71317 config.go:182] Loaded profile config "functional-479649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:37:32.853403   71317 cli_runner.go:164] Run: docker container inspect functional-479649 --format={{.State.Status}}
I0930 10:37:32.871467   71317 ssh_runner.go:195] Run: systemctl --version
I0930 10:37:32.871522   71317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479649
I0930 10:37:32.887992   71317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/functional-479649/id_rsa Username:docker}
I0930 10:37:32.972580   71317 build_images.go:161] Building image from path: /tmp/build.3037683937.tar
I0930 10:37:32.972649   71317 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 10:37:32.981432   71317 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3037683937.tar
I0930 10:37:32.984789   71317 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3037683937.tar: stat -c "%s %y" /var/lib/minikube/build/build.3037683937.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3037683937.tar': No such file or directory
I0930 10:37:32.984818   71317 ssh_runner.go:362] scp /tmp/build.3037683937.tar --> /var/lib/minikube/build/build.3037683937.tar (3072 bytes)
I0930 10:37:33.006875   71317 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3037683937
I0930 10:37:33.014782   71317 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3037683937 -xf /var/lib/minikube/build/build.3037683937.tar
I0930 10:37:33.022765   71317 docker.go:360] Building image: /var/lib/minikube/build/build.3037683937
I0930 10:37:33.022830   71317 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-479649 /var/lib/minikube/build/build.3037683937
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:982741355094fc47a04a3380690240fa8d5f70b2e95832abb39708eee34cad29 done
#8 naming to localhost/my-image:functional-479649 done
#8 DONE 0.0s
I0930 10:37:34.728871   71317 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-479649 /var/lib/minikube/build/build.3037683937: (1.706009207s)
I0930 10:37:34.728935   71317 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3037683937
I0930 10:37:34.737368   71317 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3037683937.tar
I0930 10:37:34.745349   71317 build_images.go:217] Built localhost/my-image:functional-479649 from /tmp/build.3037683937.tar
I0930 10:37:34.745379   71317 build_images.go:133] succeeded building to: functional-479649
I0930 10:37:34.745385   71317 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.38s)
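
The Dockerfile under testdata/build is not printed in the log, but the build steps above (#5 FROM busybox, #6 RUN true, #7 ADD content.txt) suggest it is equivalent to this three-line sketch:
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
The build runs as `minikube -p <profile> image build -t localhost/my-image:<profile> <context-dir>`; in this run it copied the context tarball into the node and executed `docker build` over SSH, as the Stderr above shows.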

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-479649
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image load --daemon kicbase/echo-server:functional-479649 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image load --daemon kicbase/echo-server:functional-479649 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.06s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-479649 docker-env) && out/minikube-linux-amd64 status -p functional-479649"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-479649 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.06s)
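
docker-env points the host's docker CLI at the daemon inside the node; a minimal sketch of what the two commands above verify:
    eval $(minikube -p functional-479649 docker-env)
    docker images   # now lists images from the node's Docker daemon, not the host's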

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-479649
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image load --daemon kicbase/echo-server:functional-479649 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.96s)
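
Taken together, Setup and the load variants above amount to this round trip; a minimal sketch using the same commands and tag names as in this run:
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-479649
    minikube -p functional-479649 image load --daemon kicbase/echo-server:functional-479649
    minikube -p functional-479649 image ls   # the tag should now appear inside the cluster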

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
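
All three variants run the same command; a minimal sketch of the operation under test (the current-context check is an assumed manual follow-up, not part of the test):
    minikube -p functional-479649 update-context
    kubectl config current-context   # should name the refreshed functional-479649 entry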

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image save kicbase/echo-server:functional-479649 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image rm kicbase/echo-server:functional-479649 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-479649
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 image save --daemon kicbase/echo-server:functional-479649 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-479649
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
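
The four image tests above form a save/remove/load cycle; a minimal sketch with the same commands, substituting a hypothetical /tmp path for the Jenkins workspace path used in this run:
    minikube -p functional-479649 image save kicbase/echo-server:functional-479649 /tmp/echo-server-save.tar
    minikube -p functional-479649 image rm kicbase/echo-server:functional-479649
    minikube -p functional-479649 image load /tmp/echo-server-save.tar
    minikube -p functional-479649 image save --daemon kicbase/echo-server:functional-479649
    docker image inspect kicbase/echo-server:functional-479649   # image is back on the host daemon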

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67444: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-479649 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9a38578c-c2f1-4bb7-8d92-dd99cb0fddb2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9a38578c-c2f1-4bb7-8d92-dd99cb0fddb2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003050546s
I0930 10:37:23.982265   10447 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service list
2024/09/30 10:37:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-479649 service list -o json: (1.336340035s)
functional_test.go:1494: Took "1.336453962s" to run "out/minikube-linux-amd64 -p functional-479649 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31760
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31760
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
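
The ServiceCmd group resolves the same NodePort several ways; a minimal sketch of the lookups used above (the endpoint 192.168.49.2:31760 is specific to this run):
    minikube -p functional-479649 service list
    minikube -p functional-479649 service hello-node --url                               # http://192.168.49.2:31760
    minikube -p functional-479649 service --namespace=default --https --url hello-node   # https variant
    minikube -p functional-479649 service hello-node --url --format={{.IP}}              # IP only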

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (14.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdany-port1069928006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727692640570100422" to /tmp/TestFunctionalparallelMountCmdany-port1069928006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727692640570100422" to /tmp/TestFunctionalparallelMountCmdany-port1069928006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727692640570100422" to /tmp/TestFunctionalparallelMountCmdany-port1069928006/001/test-1727692640570100422
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.757931ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 10:37:20.854253   10447 retry.go:31] will retry after 402.280884ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 10:37 test-1727692640570100422
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh cat /mount-9p/test-1727692640570100422
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-479649 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0ce028e2-0adc-455f-8078-e3a0013cf3f6] Pending
helpers_test.go:344: "busybox-mount" [0ce028e2-0adc-455f-8078-e3a0013cf3f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0ce028e2-0adc-455f-8078-e3a0013cf3f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0ce028e2-0adc-455f-8078-e3a0013cf3f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.003514802s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-479649 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdany-port1069928006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.60s)
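
The 9p mount flow above can be reproduced manually; a minimal sketch, with /tmp/hostdir standing in for the per-test temp directory:
    minikube mount -p functional-479649 /tmp/hostdir:/mount-9p &   # keep this process running
    minikube -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-479649 ssh -- ls -la /mount-9p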

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-479649 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.125.36 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-479649 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
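
The tunnel serial group amounts to this workflow; a minimal sketch (the curl step is an assumed manual check; the test polls the ingress IP directly and this run reported http://10.96.125.36 working):
    minikube -p functional-479649 tunnel &   # must stay running; assigns LoadBalancer ingress IPs
    kubectl --context functional-479649 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://<ingress-ip>/                # hypothetical; substitute the IP returned above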

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdspecific-port2132963731/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.390312ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 10:37:35.396985   10447 retry.go:31] will retry after 408.754203ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdspecific-port2132963731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh "sudo umount -f /mount-9p": exit status 1 (241.738153ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-479649 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdspecific-port2132963731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T" /mount1: exit status 1 (315.264362ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 10:37:37.056147   10447 retry.go:31] will retry after 352.682154ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-479649 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-479649 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-479649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3561704322/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
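
The cleanup path exercised here is the --kill flag, which the test uses to tear down all three mounts at once before confirming their parent processes are gone; a minimal sketch:
    minikube mount -p functional-479649 --kill=true   # terminates the profile's mount processes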

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-479649
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-479649
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-479649
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (100.55s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-460160 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0930 10:39:01.561359   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.567729   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.579097   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.600510   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.641885   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.723224   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:01.884757   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:02.206600   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:02.848615   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:04.130188   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:06.691977   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:11.813405   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:22.055328   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-460160 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.914016212s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (100.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-460160 -- rollout status deployment/busybox: (3.043179982s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-9h9hd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-dvq4s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-pr28p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-9h9hd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-dvq4s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-pr28p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-9h9hd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-dvq4s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-pr28p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-9h9hd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-9h9hd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-dvq4s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-dvq4s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-pr28p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-460160 -- exec busybox-7dff88458-pr28p -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (22.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-460160 -v=7 --alsologtostderr
E0930 10:39:42.537083   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-460160 -v=7 --alsologtostderr: (22.067846431s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-460160 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (14.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp testdata/cp-test.txt ha-460160:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2032688162/001/cp-test_ha-460160.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160:/home/docker/cp-test.txt ha-460160-m02:/home/docker/cp-test_ha-460160_ha-460160-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test_ha-460160_ha-460160-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160:/home/docker/cp-test.txt ha-460160-m03:/home/docker/cp-test_ha-460160_ha-460160-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test_ha-460160_ha-460160-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160:/home/docker/cp-test.txt ha-460160-m04:/home/docker/cp-test_ha-460160_ha-460160-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test_ha-460160_ha-460160-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp testdata/cp-test.txt ha-460160-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2032688162/001/cp-test_ha-460160-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m02:/home/docker/cp-test.txt ha-460160:/home/docker/cp-test_ha-460160-m02_ha-460160.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test_ha-460160-m02_ha-460160.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m02:/home/docker/cp-test.txt ha-460160-m03:/home/docker/cp-test_ha-460160-m02_ha-460160-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test_ha-460160-m02_ha-460160-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m02:/home/docker/cp-test.txt ha-460160-m04:/home/docker/cp-test_ha-460160-m02_ha-460160-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test_ha-460160-m02_ha-460160-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp testdata/cp-test.txt ha-460160-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2032688162/001/cp-test_ha-460160-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m03:/home/docker/cp-test.txt ha-460160:/home/docker/cp-test_ha-460160-m03_ha-460160.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test_ha-460160-m03_ha-460160.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m03:/home/docker/cp-test.txt ha-460160-m02:/home/docker/cp-test_ha-460160-m03_ha-460160-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test_ha-460160-m03_ha-460160-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m03:/home/docker/cp-test.txt ha-460160-m04:/home/docker/cp-test_ha-460160-m03_ha-460160-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test_ha-460160-m03_ha-460160-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp testdata/cp-test.txt ha-460160-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2032688162/001/cp-test_ha-460160-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m04:/home/docker/cp-test.txt ha-460160:/home/docker/cp-test_ha-460160-m04_ha-460160.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160 "sudo cat /home/docker/cp-test_ha-460160-m04_ha-460160.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m04:/home/docker/cp-test.txt ha-460160-m02:/home/docker/cp-test_ha-460160-m04_ha-460160-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m02 "sudo cat /home/docker/cp-test_ha-460160-m04_ha-460160-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 cp ha-460160-m04:/home/docker/cp-test.txt ha-460160-m03:/home/docker/cp-test_ha-460160-m04_ha-460160-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 ssh -n ha-460160-m03 "sudo cat /home/docker/cp-test_ha-460160-m04_ha-460160-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (11.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-460160 node stop m02 -v=7 --alsologtostderr: (10.784780446s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr: exit status 7 (603.983408ms)

                                                
                                                
-- stdout --
	ha-460160
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-460160-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-460160-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-460160-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:40:19.207019  100142 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:40:19.207295  100142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.207308  100142 out.go:358] Setting ErrFile to fd 2...
	I0930 10:40:19.207314  100142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.207543  100142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:40:19.207765  100142 out.go:352] Setting JSON to false
	I0930 10:40:19.207792  100142 mustload.go:65] Loading cluster: ha-460160
	I0930 10:40:19.207883  100142 notify.go:220] Checking for updates...
	I0930 10:40:19.208258  100142 config.go:182] Loaded profile config "ha-460160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:40:19.208276  100142 status.go:174] checking status of ha-460160 ...
	I0930 10:40:19.208759  100142 cli_runner.go:164] Run: docker container inspect ha-460160 --format={{.State.Status}}
	I0930 10:40:19.227177  100142 status.go:364] ha-460160 host status = "Running" (err=<nil>)
	I0930 10:40:19.227204  100142 host.go:66] Checking if "ha-460160" exists ...
	I0930 10:40:19.227538  100142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-460160
	I0930 10:40:19.246163  100142 host.go:66] Checking if "ha-460160" exists ...
	I0930 10:40:19.246457  100142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:40:19.246507  100142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-460160
	I0930 10:40:19.263984  100142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/ha-460160/id_rsa Username:docker}
	I0930 10:40:19.345252  100142 ssh_runner.go:195] Run: systemctl --version
	I0930 10:40:19.348857  100142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:40:19.358827  100142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:40:19.403870  100142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-30 10:40:19.39429183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:40:19.404453  100142 kubeconfig.go:125] found "ha-460160" server: "https://192.168.49.254:8443"
	I0930 10:40:19.404481  100142 api_server.go:166] Checking apiserver status ...
	I0930 10:40:19.404518  100142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:40:19.415000  100142 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2423/cgroup
	I0930 10:40:19.423723  100142 api_server.go:182] apiserver freezer: "2:freezer:/docker/f8188ff1e81144ac526d4b8a3b5f6ab7e7e42c8ebe427175c86f9f809ed5c328/kubepods/burstable/pode635297b2bea145c89f887d9f29e52fc/194108ba9c1a5074fd239e640296cd54a583159ebf28127d4dbfb9f31b28e9ac"
	I0930 10:40:19.423797  100142 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f8188ff1e81144ac526d4b8a3b5f6ab7e7e42c8ebe427175c86f9f809ed5c328/kubepods/burstable/pode635297b2bea145c89f887d9f29e52fc/194108ba9c1a5074fd239e640296cd54a583159ebf28127d4dbfb9f31b28e9ac/freezer.state
	I0930 10:40:19.431209  100142 api_server.go:204] freezer state: "THAWED"
	I0930 10:40:19.431237  100142 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:40:19.434679  100142 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:40:19.434697  100142 status.go:456] ha-460160 apiserver status = Running (err=<nil>)
	I0930 10:40:19.434706  100142 status.go:176] ha-460160 status: &{Name:ha-460160 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:40:19.434719  100142 status.go:174] checking status of ha-460160-m02 ...
	I0930 10:40:19.434931  100142 cli_runner.go:164] Run: docker container inspect ha-460160-m02 --format={{.State.Status}}
	I0930 10:40:19.451612  100142 status.go:364] ha-460160-m02 host status = "Stopped" (err=<nil>)
	I0930 10:40:19.451634  100142 status.go:377] host is not running, skipping remaining checks
	I0930 10:40:19.451641  100142 status.go:176] ha-460160-m02 status: &{Name:ha-460160-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:40:19.451664  100142 status.go:174] checking status of ha-460160-m03 ...
	I0930 10:40:19.451915  100142 cli_runner.go:164] Run: docker container inspect ha-460160-m03 --format={{.State.Status}}
	I0930 10:40:19.469218  100142 status.go:364] ha-460160-m03 host status = "Running" (err=<nil>)
	I0930 10:40:19.469242  100142 host.go:66] Checking if "ha-460160-m03" exists ...
	I0930 10:40:19.469479  100142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-460160-m03
	I0930 10:40:19.486966  100142 host.go:66] Checking if "ha-460160-m03" exists ...
	I0930 10:40:19.487225  100142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:40:19.487260  100142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-460160-m03
	I0930 10:40:19.503694  100142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/ha-460160-m03/id_rsa Username:docker}
	I0930 10:40:19.585122  100142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:40:19.595598  100142 kubeconfig.go:125] found "ha-460160" server: "https://192.168.49.254:8443"
	I0930 10:40:19.595622  100142 api_server.go:166] Checking apiserver status ...
	I0930 10:40:19.595653  100142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:40:19.605343  100142 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2296/cgroup
	I0930 10:40:19.613590  100142 api_server.go:182] apiserver freezer: "2:freezer:/docker/ef8267e3e59e7b4f945acc42f22f6d5f6da11c9e4e809d3423fedfed0c3e540e/kubepods/burstable/pod08395af2e2ce8b4a9ac66dddec52ad01/1fd1f53a83d2c858dbb35b737744def4ebb42c9691f3b4ac374a02e40d1a338b"
	I0930 10:40:19.613647  100142 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef8267e3e59e7b4f945acc42f22f6d5f6da11c9e4e809d3423fedfed0c3e540e/kubepods/burstable/pod08395af2e2ce8b4a9ac66dddec52ad01/1fd1f53a83d2c858dbb35b737744def4ebb42c9691f3b4ac374a02e40d1a338b/freezer.state
	I0930 10:40:19.620982  100142 api_server.go:204] freezer state: "THAWED"
	I0930 10:40:19.621009  100142 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:40:19.624415  100142 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:40:19.624434  100142 status.go:456] ha-460160-m03 apiserver status = Running (err=<nil>)
	I0930 10:40:19.624442  100142 status.go:176] ha-460160-m03 status: &{Name:ha-460160-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:40:19.624469  100142 status.go:174] checking status of ha-460160-m04 ...
	I0930 10:40:19.624747  100142 cli_runner.go:164] Run: docker container inspect ha-460160-m04 --format={{.State.Status}}
	I0930 10:40:19.643019  100142 status.go:364] ha-460160-m04 host status = "Running" (err=<nil>)
	I0930 10:40:19.643039  100142 host.go:66] Checking if "ha-460160-m04" exists ...
	I0930 10:40:19.643263  100142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-460160-m04
	I0930 10:40:19.659919  100142 host.go:66] Checking if "ha-460160-m04" exists ...
	I0930 10:40:19.660158  100142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:40:19.660195  100142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-460160-m04
	I0930 10:40:19.676162  100142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/ha-460160-m04/id_rsa Username:docker}
	I0930 10:40:19.757258  100142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:40:19.767468  100142 status.go:176] ha-460160-m04 status: &{Name:ha-460160-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (111.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 node start m02 -v=7 --alsologtostderr
E0930 10:40:23.498484   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:41:45.420828   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.108261   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.114630   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.125984   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.147395   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.188761   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.270188   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.431715   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:06.753303   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:07.395325   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:08.677187   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-460160 node start m02 -v=7 --alsologtostderr: (1m50.571885879s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
E0930 10:42:11.239463   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (111.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-460160 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-460160 -v=7 --alsologtostderr
E0930 10:42:16.361116   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:42:26.603329   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-460160 -v=7 --alsologtostderr: (33.449974575s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-460160 --wait=true -v=7 --alsologtostderr
E0930 10:42:47.085553   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:28.047774   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:01.561172   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:29.262342   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:49.969056   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-460160 --wait=true -v=7 --alsologtostderr: (2m16.413817641s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-460160
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-460160 node delete m03 -v=7 --alsologtostderr: (8.396479984s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (32.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-460160 stop -v=7 --alsologtostderr: (32.151066919s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr: exit status 7 (92.95116ms)

                                                
                                                
-- stdout --
	ha-460160
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-460160-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-460160-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:45:44.416961  131249 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:45:44.417079  131249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:45:44.417088  131249 out.go:358] Setting ErrFile to fd 2...
	I0930 10:45:44.417092  131249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:45:44.417263  131249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:45:44.417418  131249 out.go:352] Setting JSON to false
	I0930 10:45:44.417442  131249 mustload.go:65] Loading cluster: ha-460160
	I0930 10:45:44.417504  131249 notify.go:220] Checking for updates...
	I0930 10:45:44.417775  131249 config.go:182] Loaded profile config "ha-460160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:45:44.417790  131249 status.go:174] checking status of ha-460160 ...
	I0930 10:45:44.418243  131249 cli_runner.go:164] Run: docker container inspect ha-460160 --format={{.State.Status}}
	I0930 10:45:44.436418  131249 status.go:364] ha-460160 host status = "Stopped" (err=<nil>)
	I0930 10:45:44.436440  131249 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:44.436450  131249 status.go:176] ha-460160 status: &{Name:ha-460160 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:45:44.436475  131249 status.go:174] checking status of ha-460160-m02 ...
	I0930 10:45:44.436810  131249 cli_runner.go:164] Run: docker container inspect ha-460160-m02 --format={{.State.Status}}
	I0930 10:45:44.453321  131249 status.go:364] ha-460160-m02 host status = "Stopped" (err=<nil>)
	I0930 10:45:44.453341  131249 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:44.453348  131249 status.go:176] ha-460160-m02 status: &{Name:ha-460160-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:45:44.453371  131249 status.go:174] checking status of ha-460160-m04 ...
	I0930 10:45:44.453674  131249 cli_runner.go:164] Run: docker container inspect ha-460160-m04 --format={{.State.Status}}
	I0930 10:45:44.469280  131249 status.go:364] ha-460160-m04 host status = "Stopped" (err=<nil>)
	I0930 10:45:44.469297  131249 status.go:377] host is not running, skipping remaining checks
	I0930 10:45:44.469303  131249 status.go:176] ha-460160-m04 status: &{Name:ha-460160-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (94.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-460160 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0930 10:47:06.108663   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-460160 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m33.96218827s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (31.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-460160 --control-plane -v=7 --alsologtostderr
E0930 10:47:33.810433   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-460160 --control-plane -v=7 --alsologtostderr: (30.807984817s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-460160 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (31.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (23.16s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-334236 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-334236 --driver=docker  --container-runtime=docker: (23.164476295s)
--- PASS: TestImageBuild/serial/Setup (23.16s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.31s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-334236
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-334236: (1.306865849s)
--- PASS: TestImageBuild/serial/NormalBuild (1.31s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.74s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-334236
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.74s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-334236
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-334236
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (64.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-683892 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0930 10:49:01.561633   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-683892 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m4.914108003s)
--- PASS: TestJSONOutput/start/Command (64.91s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.47s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-683892 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.39s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-683892 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.39s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-683892 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-683892 --output=json --user=testUser: (10.874144041s)
--- PASS: TestJSONOutput/stop/Command (10.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-295454 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-295454 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.110195ms)

-- stdout --
	{"specversion":"1.0","id":"d7b0eb04-f133-4031-9c74-052c03cbfcc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-295454] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3bf6f58-88d0-4b96-8112-bc0ad30cd8b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"096db1d1-9ae9-4fff-b0cd-9749aae8df27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b65fea8-18fb-4eb2-b915-6900aada23b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig"}}
	{"specversion":"1.0","id":"e0da07e3-9d7a-45a1-988c-6a46b244d1b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube"}}
	{"specversion":"1.0","id":"1914b5aa-6bb3-48a3-bc39-daa0aa3e1caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b68666e8-3449-4fb6-af4a-67d15c700ac7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e1c64bc9-4108-49e5-9cb0-c467b57e31b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-295454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-295454
--- PASS: TestErrorJSONOutput (0.19s)
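Note: each line in the stdout above is a CloudEvents-style JSON envelope, one event per line. The sketch below is a hypothetical consumer, not minikube's own tooling; it decodes only the fields visible in this run (specversion, id, source, type, datacontenttype, and a string-valued data map) and prints step and error events.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent covers only the fields visible in the run above;
// the real schema may carry more.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json piped in
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate interleaved non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout block above, this would report the DRV_UNSUPPORTED_OS error with exit code 56, the same condition the test asserts via exit status 56.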

TestKicCustomNetwork/create_custom_network (22.44s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-591407 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-591407 --network=: (20.509770902s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-591407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-591407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-591407: (1.9140881s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.44s)

TestKicCustomNetwork/use_default_bridge_network (22.47s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-075222 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-075222 --network=bridge: (20.641752282s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-075222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-075222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-075222: (1.814255628s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.47s)

TestKicExistingNetwork (25.45s)

=== RUN   TestKicExistingNetwork
I0930 10:50:31.250875   10447 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0930 10:50:31.266696   10447 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0930 10:50:31.266764   10447 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0930 10:50:31.266783   10447 cli_runner.go:164] Run: docker network inspect existing-network
W0930 10:50:31.283055   10447 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0930 10:50:31.283082   10447 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0930 10:50:31.283099   10447 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0930 10:50:31.283221   10447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 10:50:31.299615   10447 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51ef21d3a79 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:74:5b:d5:54} reservation:<nil>}
I0930 10:50:31.300128   10447 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00189b5e0}
I0930 10:50:31.300166   10447 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0930 10:50:31.300232   10447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0930 10:50:31.358093   10447 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-113567 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-113567 --network=existing-network: (23.521972684s)
helpers_test.go:175: Cleaning up "existing-network-113567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-113567
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-113567: (1.791986435s)
I0930 10:50:56.687977   10447 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.45s)
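Note: the cli_runner lines above show the bookkeeping behind this test: inspect the candidate network, skip 192.168.49.0/24 because an existing bridge already owns it, settle on 192.168.58.0/24, and create the bridge with explicit gateway, MTU, and minikube labels. Below is a rough, hypothetical sketch of that fallback loop; the step of 9 between candidate subnets is inferred from this log, and treating a failed create as "subnet taken" is a simplification of the real free-subnet check.

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork tries candidate private /24s until `docker network create`
// succeeds, mirroring the flags shown in the log above.
func createFreeNetwork(name string) (string, error) {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, ... (step inferred from this log)
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name)
		if err := cmd.Run(); err != nil {
			continue // assume the pool overlaps an existing network; try the next candidate
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 found for %q", name)
}

func main() {
	subnet, err := createFreeNetwork("existing-network")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created on", subnet)
}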

TestKicCustomSubnet (25.65s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-077589 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-077589 --subnet=192.168.60.0/24: (23.706605552s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-077589 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-077589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-077589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-077589: (1.928033283s)
--- PASS: TestKicCustomSubnet (25.65s)

TestKicStaticIP (22.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-135087 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-135087 --static-ip=192.168.200.200: (20.799501035s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-135087 ip
helpers_test.go:175: Cleaning up "static-ip-135087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-135087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-135087: (1.930738844s)
--- PASS: TestKicStaticIP (22.84s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.05s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-564070 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-564070 --driver=docker  --container-runtime=docker: (19.689677115s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-578535 --driver=docker  --container-runtime=docker
E0930 10:52:06.107678   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-578535 --driver=docker  --container-runtime=docker: (24.207814441s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-564070
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-578535
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-578535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-578535
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-578535: (2.046406021s)
helpers_test.go:175: Cleaning up "first-564070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-564070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-564070: (2.041616374s)
--- PASS: TestMinikubeProfile (49.05s)

TestMountStart/serial/StartWithMountFirst (6.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-436096 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-436096 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.172574419s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.17s)

TestMountStart/serial/VerifyMountFirst (0.22s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-436096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.22s)

TestMountStart/serial/StartWithMountSecond (6.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-450386 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-450386 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.158846601s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.16s)

TestMountStart/serial/VerifyMountSecond (0.22s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-450386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

TestMountStart/serial/DeleteFirst (1.42s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-436096 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-436096 --alsologtostderr -v=5: (1.417445166s)
--- PASS: TestMountStart/serial/DeleteFirst (1.42s)

TestMountStart/serial/VerifyMountPostDelete (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-450386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-450386
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-450386: (1.162701303s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (7.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-450386
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-450386: (6.577822881s)
--- PASS: TestMountStart/serial/RestartStopped (7.58s)

TestMountStart/serial/VerifyMountPostStop (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-450386 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.22s)

TestMultiNode/serial/FreshStart2Nodes (56.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563899 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563899 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.636293847s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.05s)

TestMultiNode/serial/DeployApp2Nodes (35.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-563899 -- rollout status deployment/busybox: (2.085869532s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:53:57.668254   10447 retry.go:31] will retry after 1.296136862s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:53:59.068463   10447 retry.go:31] will retry after 2.09983134s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:54:01.272442   10447 retry.go:31] will retry after 3.215841732s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0930 10:54:01.560871   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:54:04.593324   10447 retry.go:31] will retry after 2.079067349s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:54:06.778900   10447 retry.go:31] will retry after 5.505515201s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:54:12.389004   10447 retry.go:31] will retry after 7.187664461s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:54:19.685299   10447 retry.go:31] will retry after 9.992991787s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-nxbsd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-x48gm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-nxbsd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-x48gm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-nxbsd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-x48gm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (35.51s)
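Note: the retry.go lines above show the poll-with-growing-backoff pattern used while waiting for the second busybox pod to report an IP. Below is a minimal, hypothetical sketch of that loop; the exact growth and jitter model is an assumption, since the log only shows the successive wait durations.

package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"
)

// waitForTwoPodIPs re-runs query until it reports at least two IPs,
// sleeping longer between attempts, like the retry.go lines above.
func waitForTwoPodIPs(query func() string, deadline time.Duration) error {
	wait := time.Second
	for start := time.Now(); time.Since(start) < deadline; {
		if ips := strings.Fields(query()); len(ips) >= 2 {
			return nil
		}
		d := wait + time.Duration(rand.Int63n(int64(wait))) // growth plus jitter (assumed model)
		fmt.Printf("will retry after %v: expected 2 Pod IPs\n", d)
		time.Sleep(d)
		wait *= 2
	}
	return fmt.Errorf("timed out waiting for 2 Pod IPs")
}

func main() {
	attempts := 0
	err := waitForTwoPodIPs(func() string {
		attempts++
		if attempts < 4 {
			return "10.244.0.3" // second pod not scheduled yet, as in the early retries above
		}
		return "10.244.0.3 10.244.1.2"
	}, time.Minute)
	fmt.Println("result:", err)
}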

TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-nxbsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-nxbsd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-x48gm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563899 -- exec busybox-7dff88458-x48gm -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
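Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts a single token: line 5 of nslookup's output, third single-space-separated field, which is the host gateway address (192.168.67.1) that the follow-up ping targets. A hypothetical Go equivalent of just the extraction step; the sample nslookup layout in main is assumed for illustration:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of
// the nslookup output and its third single-space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // cut -d' ' counts empty fields too, so Split, not Fields
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Assumed shape of busybox nslookup output inside the pod:
	out := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.67.1\n"
	fmt.Println(hostIPFromNslookup(out)) // prints 192.168.67.1
}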

TestMultiNode/serial/AddNode (17.21s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-563899 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-563899 -v 3 --alsologtostderr: (16.654932765s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.21s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-563899 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (8.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp testdata/cp-test.txt multinode-563899:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2611763651/001/cp-test_multinode-563899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899:/home/docker/cp-test.txt multinode-563899-m02:/home/docker/cp-test_multinode-563899_multinode-563899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test_multinode-563899_multinode-563899-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899:/home/docker/cp-test.txt multinode-563899-m03:/home/docker/cp-test_multinode-563899_multinode-563899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test_multinode-563899_multinode-563899-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp testdata/cp-test.txt multinode-563899-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2611763651/001/cp-test_multinode-563899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m02:/home/docker/cp-test.txt multinode-563899:/home/docker/cp-test_multinode-563899-m02_multinode-563899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test_multinode-563899-m02_multinode-563899.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m02:/home/docker/cp-test.txt multinode-563899-m03:/home/docker/cp-test_multinode-563899-m02_multinode-563899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test_multinode-563899-m02_multinode-563899-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp testdata/cp-test.txt multinode-563899-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2611763651/001/cp-test_multinode-563899-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m03:/home/docker/cp-test.txt multinode-563899:/home/docker/cp-test_multinode-563899-m03_multinode-563899.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899 "sudo cat /home/docker/cp-test_multinode-563899-m03_multinode-563899.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 cp multinode-563899-m03:/home/docker/cp-test.txt multinode-563899-m02:/home/docker/cp-test_multinode-563899-m03_multinode-563899-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 ssh -n multinode-563899-m02 "sudo cat /home/docker/cp-test_multinode-563899-m03_multinode-563899-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.27s)

TestMultiNode/serial/StopNode (2.02s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-563899 node stop m03: (1.16516721s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563899 status: exit status 7 (425.821696ms)

-- stdout --
	multinode-563899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr: exit status 7 (428.809051ms)

-- stdout --
	multinode-563899
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563899-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563899-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0930 10:54:59.260684  217392 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:54:59.260908  217392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:54:59.260916  217392 out.go:358] Setting ErrFile to fd 2...
	I0930 10:54:59.260920  217392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:54:59.261114  217392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:54:59.261272  217392 out.go:352] Setting JSON to false
	I0930 10:54:59.261293  217392 mustload.go:65] Loading cluster: multinode-563899
	I0930 10:54:59.261400  217392 notify.go:220] Checking for updates...
	I0930 10:54:59.261644  217392 config.go:182] Loaded profile config "multinode-563899": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:54:59.261661  217392 status.go:174] checking status of multinode-563899 ...
	I0930 10:54:59.262095  217392 cli_runner.go:164] Run: docker container inspect multinode-563899 --format={{.State.Status}}
	I0930 10:54:59.282036  217392 status.go:364] multinode-563899 host status = "Running" (err=<nil>)
	I0930 10:54:59.282057  217392 host.go:66] Checking if "multinode-563899" exists ...
	I0930 10:54:59.282294  217392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-563899
	I0930 10:54:59.298947  217392 host.go:66] Checking if "multinode-563899" exists ...
	I0930 10:54:59.299218  217392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:54:59.299268  217392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-563899
	I0930 10:54:59.315137  217392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/multinode-563899/id_rsa Username:docker}
	I0930 10:54:59.397326  217392 ssh_runner.go:195] Run: systemctl --version
	I0930 10:54:59.401084  217392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:54:59.411399  217392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:54:59.457754  217392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-30 10:54:59.448010015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0930 10:54:59.458335  217392 kubeconfig.go:125] found "multinode-563899" server: "https://192.168.67.2:8443"
	I0930 10:54:59.458361  217392 api_server.go:166] Checking apiserver status ...
	I0930 10:54:59.458394  217392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:54:59.469320  217392 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I0930 10:54:59.477878  217392 api_server.go:182] apiserver freezer: "2:freezer:/docker/dabaea9bf32d4c075f041bd685a0d6ce4d7b2abb01a1406ba578064392699923/kubepods/burstable/pod3c5d6eaf4eb24fae33603c20fb2f6e2e/9357f233cd152bf1d69c6c9db0531e15d292fa7f7b4bd5a455755854a4d37e44"
	I0930 10:54:59.477943  217392 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dabaea9bf32d4c075f041bd685a0d6ce4d7b2abb01a1406ba578064392699923/kubepods/burstable/pod3c5d6eaf4eb24fae33603c20fb2f6e2e/9357f233cd152bf1d69c6c9db0531e15d292fa7f7b4bd5a455755854a4d37e44/freezer.state
	I0930 10:54:59.485738  217392 api_server.go:204] freezer state: "THAWED"
	I0930 10:54:59.485762  217392 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0930 10:54:59.489270  217392 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0930 10:54:59.489305  217392 status.go:456] multinode-563899 apiserver status = Running (err=<nil>)
	I0930 10:54:59.489317  217392 status.go:176] multinode-563899 status: &{Name:multinode-563899 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:54:59.489337  217392 status.go:174] checking status of multinode-563899-m02 ...
	I0930 10:54:59.489617  217392 cli_runner.go:164] Run: docker container inspect multinode-563899-m02 --format={{.State.Status}}
	I0930 10:54:59.506604  217392 status.go:364] multinode-563899-m02 host status = "Running" (err=<nil>)
	I0930 10:54:59.506625  217392 host.go:66] Checking if "multinode-563899-m02" exists ...
	I0930 10:54:59.506865  217392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-563899-m02
	I0930 10:54:59.523738  217392 host.go:66] Checking if "multinode-563899-m02" exists ...
	I0930 10:54:59.523977  217392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:54:59.524019  217392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-563899-m02
	I0930 10:54:59.540719  217392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19734-3685/.minikube/machines/multinode-563899-m02/id_rsa Username:docker}
	I0930 10:54:59.621237  217392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:54:59.631333  217392 status.go:176] multinode-563899-m02 status: &{Name:multinode-563899-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:54:59.631378  217392 status.go:174] checking status of multinode-563899-m03 ...
	I0930 10:54:59.631641  217392 cli_runner.go:164] Run: docker container inspect multinode-563899-m03 --format={{.State.Status}}
	I0930 10:54:59.648637  217392 status.go:364] multinode-563899-m03 host status = "Stopped" (err=<nil>)
	I0930 10:54:59.648659  217392 status.go:377] host is not running, skipping remaining checks
	I0930 10:54:59.648667  217392 status.go:176] multinode-563899-m03 status: &{Name:multinode-563899-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.02s)
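Note: the status stderr above shows how a control-plane node is judged Running: find the kube-apiserver process, read its freezer cgroup state (expecting THAWED), then GET /healthz and require a 200 with body "ok". Below is a minimal, hypothetical sketch of just the healthz probe; InsecureSkipVerify is an assumption of the sketch, whereas the real check trusts the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy mirrors the final step of the status check above:
// GET /healthz and require HTTP 200 with an "ok" body.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification instead of
			// loading the cluster CA the way the real check does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.67.2:8443/healthz"))
}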

TestMultiNode/serial/StartAfterStop (9.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-563899 node start m03 -v=7 --alsologtostderr: (8.8558992s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.49s)

TestMultiNode/serial/RestartKeepsNodes (104.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563899
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-563899
E0930 10:55:24.625648   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-563899: (22.200080481s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563899 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563899 --wait=true -v=8 --alsologtostderr: (1m22.279092961s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563899
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.57s)

TestMultiNode/serial/DeleteNode (5.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-563899 node delete m03: (4.503580522s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)

TestMultiNode/serial/StopMultiNode (21.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 stop
E0930 10:57:06.108837   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-563899 stop: (21.123487276s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563899 status: exit status 7 (80.348717ms)

-- stdout --
	multinode-563899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-563899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr: exit status 7 (82.099092ms)

-- stdout --
	multinode-563899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-563899-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0930 10:57:20.033274  232605 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:57:20.033377  232605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:57:20.033385  232605 out.go:358] Setting ErrFile to fd 2...
	I0930 10:57:20.033390  232605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:57:20.033561  232605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3685/.minikube/bin
	I0930 10:57:20.033719  232605 out.go:352] Setting JSON to false
	I0930 10:57:20.033743  232605 mustload.go:65] Loading cluster: multinode-563899
	I0930 10:57:20.033844  232605 notify.go:220] Checking for updates...
	I0930 10:57:20.034169  232605 config.go:182] Loaded profile config "multinode-563899": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:57:20.034187  232605 status.go:174] checking status of multinode-563899 ...
	I0930 10:57:20.034594  232605 cli_runner.go:164] Run: docker container inspect multinode-563899 --format={{.State.Status}}
	I0930 10:57:20.053920  232605 status.go:364] multinode-563899 host status = "Stopped" (err=<nil>)
	I0930 10:57:20.053939  232605 status.go:377] host is not running, skipping remaining checks
	I0930 10:57:20.053946  232605 status.go:176] multinode-563899 status: &{Name:multinode-563899 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:57:20.053978  232605 status.go:174] checking status of multinode-563899-m02 ...
	I0930 10:57:20.054211  232605 cli_runner.go:164] Run: docker container inspect multinode-563899-m02 --format={{.State.Status}}
	I0930 10:57:20.072261  232605 status.go:364] multinode-563899-m02 host status = "Stopped" (err=<nil>)
	I0930 10:57:20.072280  232605 status.go:377] host is not running, skipping remaining checks
	I0930 10:57:20.072285  232605 status.go:176] multinode-563899-m02 status: &{Name:multinode-563899-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.29s)

TestMultiNode/serial/RestartMultiNode (52.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563899 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563899 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (52.059664405s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563899 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.59s)

TestMultiNode/serial/ValidateNameConflict (23.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563899
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563899-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-563899-m02 --driver=docker  --container-runtime=docker: exit status 14 (57.525882ms)

-- stdout --
	* [multinode-563899-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-563899-m02' is duplicated with machine name 'multinode-563899-m02' in profile 'multinode-563899'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563899-m03 --driver=docker  --container-runtime=docker
E0930 10:58:29.172528   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563899-m03 --driver=docker  --container-runtime=docker: (20.669465079s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-563899
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-563899: exit status 80 (251.953594ms)

-- stdout --
	* Adding node m03 to cluster multinode-563899 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-563899-m03 already exists in multinode-563899-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-563899-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-563899-m03: (2.032976038s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.05s)

                                                
                                    
x
+
TestPreload (84.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-551270 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0930 10:59:01.561594   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-551270 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (50.857370911s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-551270 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-551270 image pull gcr.io/k8s-minikube/busybox: (1.45439604s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-551270
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-551270: (10.70924498s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-551270 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-551270 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (19.566589932s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-551270 image list
helpers_test.go:175: Cleaning up "test-preload-551270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-551270
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-551270: (2.180837779s)
--- PASS: TestPreload (84.99s)

                                                
                                    
x
+
TestScheduledStopUnix (94.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-575689 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-575689 --memory=2048 --driver=docker  --container-runtime=docker: (21.28673216s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575689 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-575689 -n scheduled-stop-575689
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575689 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 11:00:26.120681   10447 retry.go:31] will retry after 76.665µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.121819   10447 retry.go:31] will retry after 122.157µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.122952   10447 retry.go:31] will retry after 226.784µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.124074   10447 retry.go:31] will retry after 319.989µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.125192   10447 retry.go:31] will retry after 337.957µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.126295   10447 retry.go:31] will retry after 867.142µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.127421   10447 retry.go:31] will retry after 764.374µs: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.128564   10447 retry.go:31] will retry after 1.032371ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.129687   10447 retry.go:31] will retry after 1.387472ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.131903   10447 retry.go:31] will retry after 2.837342ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.135100   10447 retry.go:31] will retry after 4.708942ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.140298   10447 retry.go:31] will retry after 7.209507ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.148529   10447 retry.go:31] will retry after 15.363178ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.164749   10447 retry.go:31] will retry after 28.079272ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
I0930 11:00:26.193042   10447 retry.go:31] will retry after 30.42883ms: open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/scheduled-stop-575689/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575689 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575689 -n scheduled-stop-575689
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575689
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575689 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575689
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-575689: exit status 7 (60.588676ms)

                                                
                                                
-- stdout --
	scheduled-stop-575689
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575689 -n scheduled-stop-575689
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575689 -n scheduled-stop-575689: exit status 7 (60.292448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-575689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-575689
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-575689: (1.604061534s)
--- PASS: TestScheduledStopUnix (94.12s)

                                                
                                    
x
+
TestSkaffold (97.78s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3888748005 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-591571 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-591571 --memory=2600 --driver=docker  --container-runtime=docker: (22.720228266s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3888748005 run --minikube-profile skaffold-591571 --kube-context skaffold-591571 --status-check=true --port-forward=false --interactive=false
E0930 11:02:06.108435   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3888748005 run --minikube-profile skaffold-591571 --kube-context skaffold-591571 --status-check=true --port-forward=false --interactive=false: (1m0.718430487s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-bb67685-s42rv" [15f812b5-5c26-4563-a6f7-2946084fcea4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003540121s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7bcbc5c4f4-2r84j" [e3a0ff2d-5865-498f-ae6b-249ab212b134] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003380674s
helpers_test.go:175: Cleaning up "skaffold-591571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-591571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-591571: (2.714159637s)
--- PASS: TestSkaffold (97.78s)

                                                
                                    
x
+
TestInsufficientStorage (12.33s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-561757 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-561757 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.23608847s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4c1deb9a-1eb0-4d36-a1b4-60a6783895ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-561757] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"484bc077-a122-41d3-be43-757b4cd586f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"d55269e0-1eed-4c32-93aa-e58cdead00ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3dccf389-8049-400e-84f7-0ef8e6d4604d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig"}}
	{"specversion":"1.0","id":"0fcc31ea-e7bf-43cd-94f1-def14a188b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube"}}
	{"specversion":"1.0","id":"dd108f4b-7d80-4e0e-96f7-68e5e4e84a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e8654a6b-5227-4322-907e-1a8116a4c6bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d125ef7c-b72f-4244-ab2e-e11603fd3446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8a23cea3-5e3a-4bbe-aa49-c7a156f8102a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"713e23c0-4a24-49fa-be0d-1a49ad08f28d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"97c1d6d1-2440-435b-9df7-5ad5149c71c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d7b0033c-b4a9-44ff-a1f5-9ce22c60b73c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-561757\" primary control-plane node in \"insufficient-storage-561757\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8179537c-4660-4ed9-b3c7-c76a6f4fc426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e9045d7-8501-4365-8cf8-c3369ed02314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"77091155-9e5a-4416-a052-eff3b5659095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-561757 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-561757 --output=json --layout=cluster: exit status 7 (237.615661ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-561757","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-561757","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:03:26.825261  272524 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-561757" does not appear in /home/jenkins/minikube-integration/19734-3685/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-561757 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-561757 --output=json --layout=cluster: exit status 7 (241.126346ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-561757","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-561757","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:03:27.066986  272639 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-561757" does not appear in /home/jenkins/minikube-integration/19734-3685/kubeconfig
	E0930 11:03:27.076635  272639 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/insufficient-storage-561757/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-561757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-561757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-561757: (1.612416566s)
--- PASS: TestInsufficientStorage (12.33s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4071282680 start -p running-upgrade-575516 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4071282680 start -p running-upgrade-575516 --memory=2200 --vm-driver=docker  --container-runtime=docker: (32.375107238s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-575516 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-575516 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.91863297s)
helpers_test.go:175: Cleaning up "running-upgrade-575516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-575516
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-575516: (2.051704161s)
--- PASS: TestRunningBinaryUpgrade (74.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (344.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.135050446s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-287288
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-287288: (10.638096927s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-287288 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-287288 status --format={{.Host}}: exit status 7 (74.544476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.146761539s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-287288 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (88.910421ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-287288] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-287288
	    minikube start -p kubernetes-upgrade-287288 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2872882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-287288 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0930 11:10:46.491891   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-287288 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.830668832s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-287288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-287288
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-287288: (2.308320509s)
--- PASS: TestKubernetesUpgrade (344.29s)

                                                
                                    
x
+
TestMissingContainerUpgrade (151.49s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.239236871 start -p missing-upgrade-829706 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.239236871 start -p missing-upgrade-829706 --memory=2200 --driver=docker  --container-runtime=docker: (1m13.662275692s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-829706
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-829706: (12.345373258s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-829706
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-829706 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-829706 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.075861682s)
helpers_test.go:175: Cleaning up "missing-upgrade-829706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-829706
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-829706: (3.767988749s)
--- PASS: TestMissingContainerUpgrade (151.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (63.954003ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-801954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3685/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3685/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (30.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801954 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801954 --driver=docker  --container-runtime=docker: (29.785619271s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-801954 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.15s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (115.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3088411577 start -p stopped-upgrade-811504 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3088411577 start -p stopped-upgrade-811504 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m14.862216783s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3088411577 -p stopped-upgrade-811504 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3088411577 -p stopped-upgrade-811504 stop: (11.478799103s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-811504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-811504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.082825505s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (115.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --driver=docker  --container-runtime=docker
E0930 11:04:01.561010   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --driver=docker  --container-runtime=docker: (14.040091877s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-801954 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-801954 status -o json: exit status 2 (346.424799ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-801954","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-801954
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-801954: (1.805431283s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (13.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801954 --no-kubernetes --driver=docker  --container-runtime=docker: (13.633138286s)
--- PASS: TestNoKubernetes/serial/Start (13.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-801954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-801954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.489608ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-801954
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-801954: (1.172545809s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801954 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801954 --driver=docker  --container-runtime=docker: (6.718781139s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-801954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-801954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.953212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-811504
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-811504: (1.480390995s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

                                                
                                    
x
+
TestPause/serial/Start (73.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-219290 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0930 11:07:06.108241   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-219290 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m13.488678053s)
--- PASS: TestPause/serial/Start (73.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (34.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (34.721612035s)
--- PASS: TestNetworkPlugins/group/auto/Start (34.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-383110 "pgrep -a kubelet"
I0930 11:07:43.267463   10447 config.go:182] Loaded profile config "auto-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rk7gf" [604d69db-504b-4321-b0e8-25c160932f89] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rk7gf" [604d69db-504b-4321-b0e8-25c160932f89] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003402925s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (34.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-219290 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-219290 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.201777772s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (35.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0930 11:08:12.884416   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (35.990513101s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (35.99s)

                                                
                                    
x
+
TestPause/serial/Pause (0.5s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-219290 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-219290 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-219290 --output=json --layout=cluster: exit status 2 (269.359496ms)

                                                
                                                
-- stdout --
	{"Name":"pause-219290","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-219290","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.43s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-219290 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.6s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-219290 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.14s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-219290 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-219290 --alsologtostderr -v=5: (2.138246582s)
--- PASS: TestPause/serial/DeletePaused (2.14s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (16.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0930 11:08:23.126481   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.295785209s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-219290
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-219290: exit status 1 (17.928146ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-219290: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (32.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0930 11:08:43.608230   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (32.980130131s)
--- PASS: TestNetworkPlugins/group/calico/Start (32.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5b66j" [93642f93-cfb2-472a-ad38-2dd50703d8dd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004931906s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-383110 "pgrep -a kubelet"
I0930 11:08:52.823124   10447 config.go:182] Loaded profile config "kindnet-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cwk6n" [3700ddc1-a481-454e-a082-5679f885ba04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cwk6n" [3700ddc1-a481-454e-a082-5679f885ba04] Running
E0930 11:09:01.560720   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003747624s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (24.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-383110 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kindnet-383110 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.209571719s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 11:09:18.213824   10447 retry.go:31] will retry after 605.487879ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kindnet-383110 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kindnet-383110 exec deployment/netcat -- nslookup kubernetes.default: exit status 137 (7.355704474s)

                                                
                                                
** stderr ** 
	command terminated with exit code 137

                                                
                                                
** /stderr **
I0930 11:09:26.175273   10447 retry.go:31] will retry after 1.641929018s: exit status 137
net_test.go:175: (dbg) Run:  kubectl --context kindnet-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (24.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (20.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qvtxz" [a88d774c-fe90-4fe1-b320-acd61e69eca8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-qvtxz" [a88d774c-fe90-4fe1-b320-acd61e69eca8] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-qvtxz" [a88d774c-fe90-4fe1-b320-acd61e69eca8] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0930 11:09:24.570169   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "calico-node-qvtxz" [a88d774c-fe90-4fe1-b320-acd61e69eca8] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 20.00408364s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (20.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (46.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (46.211865891s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-383110 "pgrep -a kubelet"
I0930 11:09:31.613722   10447 config.go:182] Loaded profile config "calico-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-383110 replace --force -f testdata/netcat-deployment.yaml
I0930 11:09:32.160901   10447 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0930 11:09:32.175389   10447 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-twxtq" [e2da2585-1958-4313-8c66-bc7e4d7e8416] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-twxtq" [e2da2585-1958-4313-8c66-bc7e4d7e8416] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003954865s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (40.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (40.751533149s)
--- PASS: TestNetworkPlugins/group/false/Start (40.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m9.430954891s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-383110 "pgrep -a kubelet"
I0930 11:10:12.497476   10447 config.go:182] Loaded profile config "custom-flannel-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4ktxs" [7e0a2622-0233-4b11-a54d-75f441822bc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4ktxs" [7e0a2622-0233-4b11-a54d-75f441822bc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004165128s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-383110 "pgrep -a kubelet"
I0930 11:10:27.378523   10447 config.go:182] Loaded profile config "false-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vs48q" [70e04a64-95c8-4134-bf9e-df9b11eaf807] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vs48q" [70e04a64-95c8-4134-bf9e-df9b11eaf807] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003969709s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (47.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.789726075s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (64.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m4.865285976s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (64.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (42.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-383110 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (42.938617971s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-383110 "pgrep -a kubelet"
I0930 11:11:12.438965   10447 config.go:182] Loaded profile config "enable-default-cni-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4gnpt" [dfd3bd84-22d1-4fc4-848b-1c60498d7ebd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4gnpt" [dfd3bd84-22d1-4fc4-848b-1c60498d7ebd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003531196s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7d4q7" [aa041102-0d23-4805-a056-70d72eba33e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004534806s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-383110 "pgrep -a kubelet"
I0930 11:11:35.688507   10447 config.go:182] Loaded profile config "flannel-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t2qtg" [fb4e01ae-5aec-4a4b-9163-05902d73b66d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t2qtg" [fb4e01ae-5aec-4a4b-9163-05902d73b66d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004677315s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (153.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-224297 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-224297 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.818977859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-383110 "pgrep -a kubelet"
I0930 11:11:49.779783   10447 config.go:182] Loaded profile config "bridge-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-szrzn" [6ff14c4e-90c0-4c46-8a49-01a25f685028] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-szrzn" [6ff14c4e-90c0-4c46-8a49-01a25f685028] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006141596s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-383110 "pgrep -a kubelet"
I0930 11:12:01.463934   10447 config.go:182] Loaded profile config "kubenet-383110": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-383110 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9tfl8" [b4f206e8-4f67-4f3d-81ed-121eee3c9580] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9tfl8" [b4f206e8-4f67-4f3d-81ed-121eee3c9580] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 8.004292495s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (8.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (69.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-768023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:12:06.108588   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-768023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m9.981001769s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-383110 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-383110 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
E0930 11:16:39.593241   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.511812   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.835494   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.935096   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.941485   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.952876   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:49.974272   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:50.015701   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:50.097138   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:50.259424   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:50.581263   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:51.223331   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:52.505357   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:53.711597   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:55.067257   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:55.196111   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:00.188487   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.643510   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.649886   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.661207   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.682623   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.723981   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.805386   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:01.967036   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:02.288464   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:02.930091   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:04.212430   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:06.107939   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:06.774498   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:10.317297   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:10.430761   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:11.896313   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:22.138667   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:30.912474   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:34.673551   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:42.620096   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:43.440197   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:51.278771   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:17:56.559398   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (65.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-620634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-620634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m5.529286184s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-314019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:12:43.440720   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.447106   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.458546   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.479960   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.522253   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.604060   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:43.765571   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:44.087297   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:44.729313   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:46.010552   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:48.572544   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:12:53.694382   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:02.631190   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:03.935870   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-314019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m5.797527136s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-768023 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5bdf62f8-0772-4252-aeda-9d04f8fd96e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5bdf62f8-0772-4252-aeda-9d04f8fd96e3] Running
E0930 11:13:24.417148   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00369152s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-768023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-768023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-768023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (10.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-768023 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-768023 --alsologtostderr -v=3: (10.781618908s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-620634 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [130cd5a9-aba3-4289-ad39-1f13efce0ad2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0930 11:13:30.333902   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [130cd5a9-aba3-4289-ad39-1f13efce0ad2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00340006s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-620634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-620634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-620634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-768023 -n no-preload-768023
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-768023 -n no-preload-768023: exit status 7 (129.920427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-768023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (263.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-768023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-768023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.834151362s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-768023 -n no-preload-768023
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (10.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-620634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-620634 --alsologtostderr -v=3: (10.834620011s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-314019 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3a924447-2172-4f12-9334-19cc2a489e81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3a924447-2172-4f12-9334-19cc2a489e81] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003948916s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-314019 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-314019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-314019 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-314019 --alsologtostderr -v=3
E0930 11:13:46.547096   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.553480   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.564869   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.586249   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.627638   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.709031   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:46.870961   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:47.192646   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-314019 --alsologtostderr -v=3: (10.728327805s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620634 -n embed-certs-620634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620634 -n embed-certs-620634: exit status 7 (133.725169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-620634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (263.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-620634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:13:47.834126   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:49.115944   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:51.677414   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:56.799570   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-620634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.131684461s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620634 -n embed-certs-620634
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019: exit status 7 (168.859407ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-314019 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-314019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:14:01.560843   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/addons-485025/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:05.378397   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:07.041559   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.334061   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.340361   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.351745   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.373111   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.414484   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.496518   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.658751   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:11.980444   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:12.622452   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:13.903829   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-314019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.578213405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224297 create -f testdata/busybox.yaml
E0930 11:14:16.465860   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c342327-8b40-44cf-a9d7-143437cbcb09] Pending
helpers_test.go:344: "busybox" [2c342327-8b40-44cf-a9d7-143437cbcb09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c342327-8b40-44cf-a9d7-143437cbcb09] Running
E0930 11:14:21.587711   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003071964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224297 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-224297 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-224297 describe deploy/metrics-server -n kube-system
E0930 11:14:27.522987   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-224297 --alsologtostderr -v=3
E0930 11:14:31.829876   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-224297 --alsologtostderr -v=3: (10.861033684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224297 -n old-k8s-version-224297
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224297 -n old-k8s-version-224297: exit status 7 (59.321593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-224297 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (23.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-224297 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0930 11:14:52.312179   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-224297 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (23.376736038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224297 -n old-k8s-version-224297
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (23.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (28.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0930 11:15:08.484989   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:09.173921   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/functional-479649/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.698138   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.704555   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.715909   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.737460   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.778856   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:12.860258   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:13.021751   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:13.343325   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:13.985407   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:15.267060   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:17.829273   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j4kcw" [7d3af0ca-1a40-454b-a73d-013b6846276d] Pending
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j4kcw" [7d3af0ca-1a40-454b-a73d-013b6846276d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0930 11:15:22.951545   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j4kcw" [7d3af0ca-1a40-454b-a73d-013b6846276d] Running
E0930 11:15:27.300558   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.573067   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.579443   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.590836   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.612208   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.653649   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.735039   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:27.896534   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:28.218264   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:28.859816   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:30.142149   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 28.004009882s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (28.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j4kcw" [7d3af0ca-1a40-454b-a73d-013b6846276d] Running
E0930 11:15:32.704060   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:33.193962   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:33.274342   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/calico-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004462954s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-224297 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-224297 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-224297 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224297 -n old-k8s-version-224297
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224297 -n old-k8s-version-224297: exit status 2 (282.01051ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224297 -n old-k8s-version-224297
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224297 -n old-k8s-version-224297: exit status 2 (274.823596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-224297 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224297 -n old-k8s-version-224297
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224297 -n old-k8s-version-224297
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (27.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-416572 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:15:48.068306   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:53.675493   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-416572 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (27.075557792s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-416572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-416572 --alsologtostderr -v=3
E0930 11:16:08.549910   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.734509   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.740927   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.752311   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.773694   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.815079   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:12.896533   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:13.058039   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:13.379807   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:14.021611   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:15.303726   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:17.865562   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-416572 --alsologtostderr -v=3: (9.961230605s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-416572 -n newest-cni-416572
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-416572 -n newest-cni-416572: exit status 7 (61.505524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-416572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (14.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-416572 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:16:22.987353   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.338874   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.345326   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.356721   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.378241   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.419664   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.501078   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.663200   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:29.985381   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:30.406849   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kindnet-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:30.627455   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:31.909570   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-416572 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (13.714322104s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-416572 -n newest-cni-416572
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-416572 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-416572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-416572 -n newest-cni-416572
E0930 11:16:33.229448   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/enable-default-cni-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-416572 -n newest-cni-416572: exit status 2 (274.791247ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-416572 -n newest-cni-416572
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-416572 -n newest-cni-416572: exit status 2 (277.778933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-416572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-416572 -n newest-cni-416572
E0930 11:16:34.471729   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:34.637240   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/custom-flannel-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-416572 -n newest-cni-416572
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w2q7l" [1132a8db-6934-4bc4-b866-7c70580b144d] Running
E0930 11:18:02.631273   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/skaffold-591571/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004478328s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w2q7l" [1132a8db-6934-4bc4-b866-7c70580b144d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003838793s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-768023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-768023 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-768023 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-768023 -n no-preload-768023
E0930 11:18:11.434000   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/false-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-768023 -n no-preload-768023: exit status 2 (269.1023ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-768023 -n no-preload-768023
E0930 11:18:11.874409   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/bridge-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-768023 -n no-preload-768023: exit status 2 (276.067677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-768023 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-768023 -n no-preload-768023
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-768023 -n no-preload-768023
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2kfd5" [e03c1c6c-f314-4f1d-b658-b7d09fa85b1c] Running
E0930 11:18:11.142717   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/auto-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003782427s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2kfd5" [e03c1c6c-f314-4f1d-b658-b7d09fa85b1c] Running
E0930 11:18:18.025486   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/no-preload-768023/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004094293s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-620634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wcd85" [6d338947-60aa-40e4-8d85-2f654111a308] Running
E0930 11:18:20.587794   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/no-preload-768023/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004578673s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-620634 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-620634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620634 -n embed-certs-620634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620634 -n embed-certs-620634: exit status 2 (275.743052ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620634 -n embed-certs-620634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620634 -n embed-certs-620634: exit status 2 (276.628724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-620634 --alsologtostderr -v=1
E0930 11:18:23.582026   10447 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/kubenet-383110/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620634 -n embed-certs-620634
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620634 -n embed-certs-620634
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wcd85" [6d338947-60aa-40e4-8d85-2f654111a308] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003893065s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-314019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-314019 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-314019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019: exit status 2 (259.655743ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019: exit status 2 (262.672289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-314019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-314019 -n default-k8s-diff-port-314019
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.22s)

                                                
                                    

Test skip (20/342)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-383110 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-383110" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 30 Sep 2024 11:04:41 UTC
provider: minikube.sigs.k8s.io
version: v1.26.0
name: cluster_info
server: https://192.168.103.2:8443
name: missing-upgrade-829706
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19734-3685/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 30 Sep 2024 11:04:08 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: cluster_info
server: https://192.168.76.2:8443
name: offline-docker-778899
contexts:
- context:
cluster: missing-upgrade-829706
extensions:
- extension:
last-update: Mon, 30 Sep 2024 11:04:41 UTC
provider: minikube.sigs.k8s.io
version: v1.26.0
name: context_info
namespace: default
user: missing-upgrade-829706
name: missing-upgrade-829706
- context:
cluster: offline-docker-778899
extensions:
- extension:
last-update: Mon, 30 Sep 2024 11:04:08 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: context_info
namespace: default
user: offline-docker-778899
name: offline-docker-778899
current-context: missing-upgrade-829706
kind: Config
preferences: {}
users:
- name: missing-upgrade-829706
user:
client-certificate: /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/missing-upgrade-829706/client.crt
client-key: /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/missing-upgrade-829706/client.key
- name: offline-docker-778899
user:
client-certificate: /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/offline-docker-778899/client.crt
client-key: /home/jenkins/minikube-integration/19734-3685/.minikube/profiles/offline-docker-778899/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-383110

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-383110" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383110"

                                                
                                                
----------------------- debugLogs end: cilium-383110 [took: 3.733666589s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-383110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-383110
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-399656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-399656
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
Copied to clipboard