Test Report: Docker_Linux 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Test fail (1/343)

|-------|------------------------------|--------------|
| Order | Failed Test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.33        |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (72.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.982822ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0919 18:51:04.575308   14476 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:51:04.575331   14476 kapi.go:107] duration metric: took 4.306263ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002638766s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003625675s
addons_test.go:342: (dbg) Run:  kubectl --context addons-807343 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.068370206s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-807343 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 ip
2024/09/19 18:52:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable registry --alsologtostderr -v=1
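
The failure above is the in-cluster probe at addons_test.go:347: the busybox pod itself started (it was deleted afterwards), but the wget against registry.kube-system.svc.cluster.local never answered within kubectl's 1m0s window. A minimal sketch for reproducing the probe by hand against a still-running addons-807343 profile; the registry-test command is verbatim from the log, while the dns-test pod name is illustrative and assumes busybox's nslookup applet is available:

    # Re-run the exact in-cluster probe from addons_test.go:347.
    kubectl --context addons-807343 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # If it hangs again, separate name resolution from reachability.
    kubectl --context addons-807343 run --rm dns-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "nslookup registry.kube-system.svc.cluster.local"

A clean lookup paired with a hanging wget would point at the registry pod or registry-proxy rather than at cluster DNS.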
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-807343
helpers_test.go:235: (dbg) docker inspect addons-807343:

-- stdout --
	[
	    {
	        "Id": "aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3",
	        "Created": "2024-09-19T18:39:21.083354509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:21.204764889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/hostname",
	        "HostsPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/hosts",
	        "LogPath": "/var/lib/docker/containers/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3/aef97022a03bdbf5bfe734f68ce5e11b7736159d9c9501f6c9b2689006e8caa3-json.log",
	        "Name": "/addons-807343",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-807343:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-807343",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e-init/diff:/var/lib/docker/overlay2/a747039cf8c6806beef023824f909e863f6f9c2668e5d190ac4e313f702c001e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/239012100d140c6e779bbaa14d8b200915571383e66f212bafcf5cdd11426f3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-807343",
	                "Source": "/var/lib/docker/volumes/addons-807343/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-807343",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-807343",
	                "name.minikube.sigs.k8s.io": "addons-807343",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "893364c9432bbb73ed97baaf3c2546a6b86c2aa8734883a56cbcd5a406e8bc46",
	            "SandboxKey": "/var/run/docker/netns/893364c9432b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-807343": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2a3f14ee82ebab40d332e259f56e05ea2ce6e3875077d28be72472d8bcb46737",
	                    "EndpointID": "c757973548812db48ce264fa61f5ca1271f4a59b91b82f2828499cc056c04e70",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-807343",
	                        "aef97022a03b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
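
In the inspect output above, HostConfig.PortBindings carries empty HostPort values (the ports were published to 127.0.0.1 with dynamic allocation), while NetworkSettings.Ports holds the actual assignments. A sketch of reading one mapping back with the same Go-template pattern the harness applies to 22/tcp further down in this report; 5000/tcp is the registry port:

    # Prints 32770 per the NetworkSettings.Ports block above.
    docker container inspect addons-807343 \
      -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'

    # Same answer without templating.
    docker port addons-807343 5000/tcp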
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-807343 -n addons-807343
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-260378                                                                   | download-docker-260378 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-546000   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | binary-mirror-546000                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33185                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-546000                                                                     | binary-mirror-546000   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | addons-807343                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | addons-807343                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-807343 --wait=true                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:42 UTC | 19 Sep 24 18:43 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-807343 ssh cat                                                                       | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | /opt/local-path-provisioner/pvc-ac5b37a8-6b22-43fd-8e57-431a7ab03924_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | addons-807343                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | -p addons-807343                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-807343 addons                                                                        | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-807343 addons                                                                        | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-807343 addons                                                                        | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | -p addons-807343                                                                            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:52 UTC |
	|         | addons-807343                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-807343 ssh curl -s                                                                   | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-807343 ip                                                                            | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-807343 ip                                                                            | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-807343 addons disable                                                                | addons-807343          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
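The wrapped start row in the Audit table reassembles into one invocation. A reconstruction from the table columns, assuming the same out/minikube-linux-amd64 binary used elsewhere in this run (flags verbatim, order as logged and not significant):

    out/minikube-linux-amd64 start -p addons-807343 --wait=true \
      --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
      --driver=docker --container-runtime=docker \
      --addons=ingress --addons=ingress-dns --addons=helm-tiller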
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:57.769495   15785 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:57.769590   15785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:57.769598   15785 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:57.769603   15785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:57.769759   15785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 18:38:57.770270   15785 out.go:352] Setting JSON to false
	I0919 18:38:57.771052   15785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1280,"bootTime":1726769858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:38:57.771156   15785 start.go:139] virtualization: kvm guest
	I0919 18:38:57.773048   15785 out.go:177] * [addons-807343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:38:57.774079   15785 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:38:57.774083   15785 notify.go:220] Checking for updates...
	I0919 18:38:57.775989   15785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:57.777123   15785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:38:57.778176   15785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	I0919 18:38:57.779187   15785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:38:57.780208   15785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:38:57.781428   15785 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:57.801472   15785 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:57.801539   15785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:57.843807   15785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:38:57.835623197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:57.843957   15785 docker.go:318] overlay module found
	I0919 18:38:57.845564   15785 out.go:177] * Using the docker driver based on user configuration
	I0919 18:38:57.846478   15785 start.go:297] selected driver: docker
	I0919 18:38:57.846490   15785 start.go:901] validating driver "docker" against <nil>
	I0919 18:38:57.846503   15785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:38:57.847471   15785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:57.889589   15785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:38:57.881813913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:57.889780   15785 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:38:57.889993   15785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:38:57.891441   15785 out.go:177] * Using Docker driver with root privileges
	I0919 18:38:57.892507   15785 cni.go:84] Creating CNI manager for ""
	I0919 18:38:57.892557   15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:38:57.892567   15785 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:38:57.892616   15785 start.go:340] cluster config:
	{Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:38:57.893673   15785 out.go:177] * Starting "addons-807343" primary control-plane node in "addons-807343" cluster
	I0919 18:38:57.894637   15785 cache.go:121] Beginning downloading kic base image for docker with docker
	I0919 18:38:57.895627   15785 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:38:57.896558   15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:38:57.896588   15785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0919 18:38:57.896597   15785 cache.go:56] Caching tarball of preloaded images
	I0919 18:38:57.896645   15785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:38:57.896679   15785 preload.go:172] Found /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 18:38:57.896689   15785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0919 18:38:57.897025   15785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json ...
	I0919 18:38:57.897048   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json: {Name:mkbc202ab93ac6c9af3368c03dc9b7ef5c44a6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:38:57.910756   15785 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:38:57.910832   15785 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:38:57.910844   15785 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:38:57.910848   15785 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:38:57.910854   15785 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:38:57.910861   15785 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:09.564058   15785 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:09.564091   15785 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:39:09.564125   15785 start.go:360] acquireMachinesLock for addons-807343: {Name:mk65a2ec792cea9016395641b31b3f3ce57d8e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:09.564205   15785 start.go:364] duration metric: took 63.392µs to acquireMachinesLock for "addons-807343"
	I0919 18:39:09.564224   15785 start.go:93] Provisioning new machine with config: &{Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:39:09.564288   15785 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:09.565738   15785 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:09.565992   15785 start.go:159] libmachine.API.Create for "addons-807343" (driver="docker")
	I0919 18:39:09.566023   15785 client.go:168] LocalClient.Create starting
	I0919 18:39:09.566116   15785 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem
	I0919 18:39:09.652101   15785 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem
	I0919 18:39:09.947918   15785 cli_runner.go:164] Run: docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:09.963293   15785 cli_runner.go:211] docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:09.963350   15785 network_create.go:284] running [docker network inspect addons-807343] to gather additional debugging logs...
	I0919 18:39:09.963366   15785 cli_runner.go:164] Run: docker network inspect addons-807343
	W0919 18:39:09.977079   15785 cli_runner.go:211] docker network inspect addons-807343 returned with exit code 1
	I0919 18:39:09.977100   15785 network_create.go:287] error running [docker network inspect addons-807343]: docker network inspect addons-807343: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-807343 not found
	I0919 18:39:09.977118   15785 network_create.go:289] output of [docker network inspect addons-807343]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-807343 not found
	
	** /stderr **
	I0919 18:39:09.977206   15785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:09.991120   15785 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001adaa40}
	I0919 18:39:09.991158   15785 network_create.go:124] attempt to create docker network addons-807343 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:09.991195   15785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-807343 addons-807343
	I0919 18:39:10.044425   15785 network_create.go:108] docker network addons-807343 192.168.49.0/24 created
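After network_create reports success, the chosen subnet and gateway can be read back with the same docker templating minikube itself uses for network inspection; a minimal verification sketch:

    # Should print "192.168.49.0/24 192.168.49.1" per the lines above.
    docker network inspect addons-807343 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'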
	I0919 18:39:10.044455   15785 kic.go:121] calculated static IP "192.168.49.2" for the "addons-807343" container
	I0919 18:39:10.044514   15785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:10.057376   15785 cli_runner.go:164] Run: docker volume create addons-807343 --label name.minikube.sigs.k8s.io=addons-807343 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:10.072446   15785 oci.go:103] Successfully created a docker volume addons-807343
	I0919 18:39:10.072519   15785 cli_runner.go:164] Run: docker run --rm --name addons-807343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --entrypoint /usr/bin/test -v addons-807343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:17.212365   15785 cli_runner.go:217] Completed: docker run --rm --name addons-807343-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --entrypoint /usr/bin/test -v addons-807343:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (7.139790141s)
	I0919 18:39:17.212392   15785 oci.go:107] Successfully prepared a docker volume addons-807343
	I0919 18:39:17.212414   15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:17.212435   15785 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:17.212496   15785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-807343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:21.026329   15785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-807343:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.813800374s)
	I0919 18:39:21.026361   15785 kic.go:203] duration metric: took 3.813924362s to extract preloaded images to volume ...
	W0919 18:39:21.026469   15785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:21.026550   15785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:21.069551   15785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-807343 --name addons-807343 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-807343 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-807343 --network addons-807343 --ip 192.168.49.2 --volume addons-807343:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:39:21.360007   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Running}}
	I0919 18:39:21.377856   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:21.395214   15785 cli_runner.go:164] Run: docker exec addons-807343 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:21.436072   15785 oci.go:144] the created container "addons-807343" has a running status.
	I0919 18:39:21.436111   15785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa...
	I0919 18:39:21.742892   15785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:21.761623   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:21.778849   15785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:21.778869   15785 kic_runner.go:114] Args: [docker exec --privileged addons-807343 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:39:21.825918   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:21.843680   15785 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:21.843771   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:21.859898   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:21.860112   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:21.860126   15785 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:21.998104   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807343
	
	I0919 18:39:21.998128   15785 ubuntu.go:169] provisioning hostname "addons-807343"
	I0919 18:39:21.998187   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.015227   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:22.015473   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:22.015492   15785 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-807343 && echo "addons-807343" | sudo tee /etc/hostname
	I0919 18:39:22.159919   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807343
	
	I0919 18:39:22.159986   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.176807   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:22.177000   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:22.177019   15785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-807343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-807343/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-807343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:22.302391   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
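(Editor's note, not part of the test output.) The shell block above is an idempotent /etc/hosts fixup: if no line already maps the new hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one, so re-running provisioning never duplicates entries. A rough Go equivalent of the same decision, reading the file and printing the result instead of writing it back (hostname and path from this run, the rest illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const hostname = "addons-807343"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	hosts := string(data)

	// Mirrors: grep -xq '.*\saddons-807343' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + hostname + `$`).MatchString(hosts) {
		fmt.Println("hostname already mapped, nothing to do")
		return
	}
	// Rewrite an existing 127.0.1.1 line, else append a new one.
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		hosts = loop.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	} else {
		hosts += "127.0.1.1 " + hostname + "\n"
	}
	fmt.Print(hosts) // the real provisioner writes this back via sudo tee/sed
}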
	I0919 18:39:22.302414   15785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7708/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7708/.minikube}
	I0919 18:39:22.302428   15785 ubuntu.go:177] setting up certificates
	I0919 18:39:22.302439   15785 provision.go:84] configureAuth start
	I0919 18:39:22.302489   15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
	I0919 18:39:22.317859   15785 provision.go:143] copyHostCerts
	I0919 18:39:22.317919   15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/ca.pem (1078 bytes)
	I0919 18:39:22.318016   15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/cert.pem (1123 bytes)
	I0919 18:39:22.318073   15785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7708/.minikube/key.pem (1675 bytes)
	I0919 18:39:22.318122   15785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem org=jenkins.addons-807343 san=[127.0.0.1 192.168.49.2 addons-807343 localhost minikube]
	I0919 18:39:22.454290   15785 provision.go:177] copyRemoteCerts
	I0919 18:39:22.454341   15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:22.454389   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.470144   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:22.562481   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 18:39:22.582322   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:22.601621   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:22.620424   15785 provision.go:87] duration metric: took 317.975428ms to configureAuth
	I0919 18:39:22.620443   15785 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:22.620613   15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:39:22.620656   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.636292   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:22.636473   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:22.636488   15785 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 18:39:22.762837   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 18:39:22.762862   15785 ubuntu.go:71] root file system type: overlay
	I0919 18:39:22.763303   15785 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 18:39:22.763406   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.779566   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:22.779769   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:22.779849   15785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 18:39:22.916013   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 18:39:22.916084   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:22.931898   15785 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:22.932052   15785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 18:39:22.932068   15785 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 18:39:23.571247   15785 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-19 18:39:22.907379771 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
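(Editor's note, not part of the test output.) The `sudo diff -u old new || { mv ... && systemctl restart docker; }` one-liner above is a change-gated restart: diff exits 0 when the rendered unit matches the installed one, so the move/reload/enable/restart branch only runs when the file differs or is missing, which is why the unified diff appears in the output here. The same gate sketched in Go, shelling out like the provisioner does (paths from this run):

package main

import (
	"fmt"
	"os/exec"
)

// unitChanged is true when diff exits non-zero, i.e. the files differ
// (or the current unit does not exist yet).
func unitChanged(current, proposed string) bool {
	return exec.Command("sudo", "diff", "-u", current, proposed).Run() != nil
}

func main() {
	cur := "/lib/systemd/system/docker.service"
	next := cur + ".new"
	if !unitChanged(cur, next) {
		fmt.Println("unit unchanged; skipping restart")
		return
	}
	// Same sequence as the log: install, reload, enable, restart.
	for _, args := range [][]string{
		{"sudo", "mv", next, cur},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			fmt.Println("step failed:", args, err)
			return
		}
	}
}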
	
	I0919 18:39:23.571276   15785 machine.go:96] duration metric: took 1.727574924s to provisionDockerMachine
	I0919 18:39:23.571288   15785 client.go:171] duration metric: took 14.005257278s to LocalClient.Create
	I0919 18:39:23.571306   15785 start.go:167] duration metric: took 14.005314967s to libmachine.API.Create "addons-807343"
	I0919 18:39:23.571315   15785 start.go:293] postStartSetup for "addons-807343" (driver="docker")
	I0919 18:39:23.571327   15785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:23.571391   15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:23.571436   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:23.587126   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:23.682960   15785 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:23.685630   15785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:23.685664   15785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:23.685676   15785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:23.685685   15785 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:23.685699   15785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7708/.minikube/addons for local assets ...
	I0919 18:39:23.685759   15785 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7708/.minikube/files for local assets ...
	I0919 18:39:23.685789   15785 start.go:296] duration metric: took 114.468091ms for postStartSetup
	I0919 18:39:23.686049   15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
	I0919 18:39:23.702153   15785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/config.json ...
	I0919 18:39:23.702397   15785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:23.702444   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:23.717641   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:23.811378   15785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:23.815165   15785 start.go:128] duration metric: took 14.250864255s to createHost
	I0919 18:39:23.815189   15785 start.go:83] releasing machines lock for "addons-807343", held for 14.250973949s
	I0919 18:39:23.815253   15785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-807343
	I0919 18:39:23.830836   15785 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:23.830865   15785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:23.830881   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:23.830926   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:23.846376   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:23.846756   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:23.934289   15785 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:24.004699   15785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:24.008795   15785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 18:39:24.030163   15785 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:24.030234   15785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:24.053740   15785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 18:39:24.053763   15785 start.go:495] detecting cgroup driver to use...
	I0919 18:39:24.053792   15785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:24.053884   15785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:24.067394   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0919 18:39:24.075704   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 18:39:24.084089   15785 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 18:39:24.084137   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 18:39:24.092527   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:39:24.100742   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 18:39:24.108907   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:39:24.117093   15785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:24.124700   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 18:39:24.132811   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 18:39:24.140816   15785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 18:39:24.149218   15785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:24.156295   15785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:24.163264   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:24.237676   15785 ssh_runner.go:195] Run: sudo systemctl restart containerd
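(Editor's note, not part of the test output.) The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place to match the detected "cgroupfs" driver: pause image, SystemdCgroup = false, the runc v2 runtime, the CNI conf_dir, and so on, then reloads and restarts containerd. The same kind of targeted rewrite for the SystemdCgroup line in Go, with the regex mirroring the sed expression (this sketch only prints; writing the file back would need root):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n"
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}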
	I0919 18:39:24.309554   15785 start.go:495] detecting cgroup driver to use...
	I0919 18:39:24.309610   15785 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:24.309659   15785 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 18:39:24.321516   15785 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0919 18:39:24.321585   15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 18:39:24.333943   15785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:24.348994   15785 ssh_runner.go:195] Run: which cri-dockerd
	I0919 18:39:24.352406   15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 18:39:24.360693   15785 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0919 18:39:24.376094   15785 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 18:39:24.468983   15785 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 18:39:24.544209   15785 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 18:39:24.544355   15785 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0919 18:39:24.569328   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:24.647033   15785 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 18:39:24.883161   15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 18:39:24.893297   15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:39:24.903040   15785 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 18:39:24.977219   15785 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 18:39:25.058059   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:25.130328   15785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 18:39:25.141773   15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:39:25.151266   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:25.221331   15785 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 18:39:25.276597   15785 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 18:39:25.276675   15785 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 18:39:25.279901   15785 start.go:563] Will wait 60s for crictl version
	I0919 18:39:25.279947   15785 ssh_runner.go:195] Run: which crictl
	I0919 18:39:25.283042   15785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:25.312857   15785 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0919 18:39:25.312919   15785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:39:25.333521   15785 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 18:39:25.356400   15785 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0919 18:39:25.356474   15785 cli_runner.go:164] Run: docker network inspect addons-807343 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:25.371020   15785 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:25.374105   15785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:25.383284   15785 kubeadm.go:883] updating cluster {Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:25.383394   15785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:39:25.383451   15785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:39:25.400716   15785 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:39:25.400735   15785 docker.go:615] Images already preloaded, skipping extraction
	I0919 18:39:25.400784   15785 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 18:39:25.417050   15785 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 18:39:25.417072   15785 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:39:25.417081   15785 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0919 18:39:25.417174   15785 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-807343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:25.417231   15785 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 18:39:25.459623   15785 cni.go:84] Creating CNI manager for ""
	I0919 18:39:25.459648   15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:39:25.459659   15785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:25.459681   15785 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-807343 NodeName:addons-807343 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:25.459865   15785 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-807343"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
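(Editor's note, not part of the test output.) The kubeadm config printed above is rendered from the kubeadm options struct logged just before it; values such as the advertise address, node name, and CRI socket are substituted into a template, and the result is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A toy version of that rendering step with text/template; the template fragment here is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "addons-807343",
	})
}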
	
	I0919 18:39:25.459927   15785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:25.467537   15785 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:25.467595   15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:25.474698   15785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 18:39:25.489321   15785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:25.503568   15785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0919 18:39:25.517665   15785 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:25.520416   15785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:25.528989   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:25.610615   15785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:25.621892   15785 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343 for IP: 192.168.49.2
	I0919 18:39:25.621907   15785 certs.go:194] generating shared ca certs ...
	I0919 18:39:25.621920   15785 certs.go:226] acquiring lock for ca certs: {Name:mk9b3af41122a34a592ac6eeed2c52def55bc0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:25.622030   15785 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key
	I0919 18:39:25.811749   15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt ...
	I0919 18:39:25.811779   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt: {Name:mk88a94bc694ddec2dfbbbabbcd781f123ddd9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:25.811946   15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key ...
	I0919 18:39:25.811958   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key: {Name:mkc66a8180eb661e285aadfb26501f8024a68350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:25.812040   15785 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key
	I0919 18:39:25.936604   15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt ...
	I0919 18:39:25.936633   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt: {Name:mk59da6c00fef0ea3e57ded18a5a446ce8386b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:25.936794   15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key ...
	I0919 18:39:25.936805   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key: {Name:mk97706d2bef6bf588fc277ec34770368952dd51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:25.936882   15785 certs.go:256] generating profile certs ...
	I0919 18:39:25.936937   15785 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key
	I0919 18:39:25.936952   15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt with IP's: []
	I0919 18:39:26.130481   15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt ...
	I0919 18:39:26.130509   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: {Name:mk2af64e4bfa59ecca1ebc34fd4b54f302b8c9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.130673   15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key ...
	I0919 18:39:26.130685   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.key: {Name:mk7cc9b760b355caaa9de5b438ced6df5b29b8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.130758   15785 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757
	I0919 18:39:26.130779   15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:26.291801   15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 ...
	I0919 18:39:26.291831   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757: {Name:mkda54c7662691b7a5519485a4d5ca155d3460c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.291988   15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757 ...
	I0919 18:39:26.292002   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757: {Name:mk26c5a88a08c0ce5c993493b938c9d87c643a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.292076   15785 certs.go:381] copying /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt.69611757 -> /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt
	I0919 18:39:26.292155   15785 certs.go:385] copying /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key.69611757 -> /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key
	I0919 18:39:26.292209   15785 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key
	I0919 18:39:26.292235   15785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt with IP's: []
	I0919 18:39:26.349251   15785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt ...
	I0919 18:39:26.349291   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt: {Name:mke88b5dc76835a1e2d726f450c48f436b0d7d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.349469   15785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key ...
	I0919 18:39:26.349482   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key: {Name:mk091ff61abd42dff135c6e85dbd56e53e007fe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:26.349673   15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:26.349706   15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/ca.pem (1078 bytes)
	I0919 18:39:26.349728   15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:26.349752   15785 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7708/.minikube/certs/key.pem (1675 bytes)
	I0919 18:39:26.350300   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:26.370865   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:26.390309   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:26.409440   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:26.428290   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:26.447263   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:26.466154   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:26.485240   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 18:39:26.504162   15785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:26.523192   15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:26.537647   15785 ssh_runner.go:195] Run: openssl version
	I0919 18:39:26.542410   15785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:26.550042   15785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:26.552907   15785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:26.552955   15785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:26.558581   15785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
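(Editor's note, not part of the test output.) The `openssl x509 -hash` call and the b5213941.0 symlink above follow OpenSSL's c_rehash convention: TLS clients locate a CA in /etc/ssl/certs through a symlink named after the certificate's subject hash, so minikubeCA.pem gets linked as <subject-hash>.0. The two steps reproduced from Go (cert path from this run; creating the link in /etc/ssl/certs needs root, so a failure is just printed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink not created:", err) // typically permission denied without root
	} else {
		fmt.Println("linked", link, "->", pem)
	}
}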
	I0919 18:39:26.566026   15785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:26.568809   15785 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:39:26.568857   15785 kubeadm.go:392] StartCluster: {Name:addons-807343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-807343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:26.568948   15785 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 18:39:26.584613   15785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:26.591832   15785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:26.599095   15785 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:26.599141   15785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:26.606021   15785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:26.606037   15785 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:26.606064   15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:26.613144   15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:26.613187   15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:26.619993   15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:26.626610   15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:26.626649   15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:26.633254   15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:26.640138   15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:26.640171   15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:26.646789   15785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:26.653622   15785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:26.653655   15785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:39:26.660309   15785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:39:26.693756   15785 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:26.693823   15785 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:26.711253   15785 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:26.711532   15785 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0919 18:39:26.711593   15785 kubeadm.go:310] OS: Linux
	I0919 18:39:26.711657   15785 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:26.711730   15785 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:26.711800   15785 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:26.711867   15785 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:26.711937   15785 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:26.712005   15785 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:26.712065   15785 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:26.712115   15785 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:26.712169   15785 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:26.758966   15785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:26.759126   15785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:26.759237   15785 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:26.768259   15785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:26.770830   15785 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:26.770936   15785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:26.771037   15785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:26.962502   15785 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:27.179993   15785 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:27.270096   15785 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:27.429393   15785 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:27.634812   15785 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:27.634924   15785 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-807343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:27.877591   15785 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:27.877710   15785 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-807343 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:27.927256   15785 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:28.094484   15785 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:28.312183   15785 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:28.312289   15785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:28.446828   15785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:28.520358   15785 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:28.786269   15785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:28.840547   15785 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:28.938641   15785 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:28.939165   15785 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:28.941424   15785 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:28.944270   15785 out.go:235]   - Booting up control plane ...
	I0919 18:39:28.944397   15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:28.944486   15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:28.944561   15785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:28.952798   15785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:28.957522   15785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:28.957590   15785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:29.039800   15785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:29.039916   15785 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:29.541218   15785 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.456511ms
	I0919 18:39:29.541296   15785 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:39:34.042383   15785 kubeadm.go:310] [api-check] The API server is healthy after 4.501228564s
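kubeadm gates each boot phase on a local health endpoint: the kubelet serves plain HTTP on 127.0.0.1:10248 (polled above) and the API server is probed over TLS, which minikube exposes on port 8443. If a run hangs at this point, the same probes can be issued by hand from inside the node; the ports and paths below come from this log, the rest is a manual sketch:

	curl -sf http://127.0.0.1:10248/healthz    # kubelet liveness, as in the kubelet-check above
	curl -sk https://127.0.0.1:8443/healthz   # kube-apiserver health (self-signed cert, hence -k)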
	I0919 18:39:34.053763   15785 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:39:34.062315   15785 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:39:34.076171   15785 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:39:34.076333   15785 kubeadm.go:310] [mark-control-plane] Marking the node addons-807343 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:39:34.082777   15785 kubeadm.go:310] [bootstrap-token] Using token: jppcbr.ipmzwmexwii5boyd
	I0919 18:39:34.083888   15785 out.go:235]   - Configuring RBAC rules ...
	I0919 18:39:34.084024   15785 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:39:34.086696   15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:39:34.091730   15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:39:34.093714   15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0919 18:39:34.095697   15785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:39:34.097532   15785 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:39:34.446856   15785 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:39:34.887992   15785 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:39:35.447727   15785 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:39:35.449363   15785 kubeadm.go:310] 
	I0919 18:39:35.449436   15785 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:39:35.449450   15785 kubeadm.go:310] 
	I0919 18:39:35.449529   15785 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:39:35.449537   15785 kubeadm.go:310] 
	I0919 18:39:35.449558   15785 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:39:35.449625   15785 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:39:35.449669   15785 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:39:35.449675   15785 kubeadm.go:310] 
	I0919 18:39:35.449727   15785 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:39:35.449734   15785 kubeadm.go:310] 
	I0919 18:39:35.449773   15785 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:39:35.449779   15785 kubeadm.go:310] 
	I0919 18:39:35.449822   15785 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:39:35.449890   15785 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:39:35.449952   15785 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:39:35.449958   15785 kubeadm.go:310] 
	I0919 18:39:35.450087   15785 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:39:35.450195   15785 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:39:35.450217   15785 kubeadm.go:310] 
	I0919 18:39:35.450329   15785 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jppcbr.ipmzwmexwii5boyd \
	I0919 18:39:35.450466   15785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e0fcc53032e0b18914406382acdde1a617457fe4835684fffa9f8c03161aa32e \
	I0919 18:39:35.450503   15785 kubeadm.go:310] 	--control-plane 
	I0919 18:39:35.450519   15785 kubeadm.go:310] 
	I0919 18:39:35.450622   15785 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:39:35.450632   15785 kubeadm.go:310] 
	I0919 18:39:35.450742   15785 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jppcbr.ipmzwmexwii5boyd \
	I0919 18:39:35.450881   15785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e0fcc53032e0b18914406382acdde1a617457fe4835684fffa9f8c03161aa32e 
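The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If this output is lost, the hash can be recomputed from the CA certificate; minikube keeps its certs under /var/lib/minikube/certs per the [certs] phase above (the ca.crt filename is the kubeadm convention, assumed here), using the standard recipe from the kubeadm documentation:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'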
	I0919 18:39:35.452782   15785 kubeadm.go:310] W0919 18:39:26.691424    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:35.453072   15785 kubeadm.go:310] W0919 18:39:26.691966    1918 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:35.453270   15785 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0919 18:39:35.453363   15785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:39:35.453386   15785 cni.go:84] Creating CNI manager for ""
	I0919 18:39:35.453404   15785 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:39:35.454864   15785 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:39:35.456073   15785 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:39:35.463941   15785 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
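With the docker driver and docker container runtime, minikube falls back to the plain bridge CNI rather than deploying a CNI DaemonSet, which is why a 496-byte conflist is copied into /etc/cni/net.d above. The exact JSON varies by minikube version; to inspect what was actually written on this node (hypothetical invocation, profile name taken from this run):

	minikube -p addons-807343 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist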
	I0919 18:39:35.479046   15785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:39:35.479126   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:35.479178   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-807343 minikube.k8s.io/updated_at=2024_09_19T18_39_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-807343 minikube.k8s.io/primary=true
	I0919 18:39:35.569970   15785 ops.go:34] apiserver oom_adj: -16
	I0919 18:39:35.582355   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:36.083176   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:36.582986   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:37.083332   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:37.583026   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:38.083116   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:38.583366   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:39.082554   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:39.583201   15785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:39.645539   15785 kubeadm.go:1113] duration metric: took 4.166483325s to wait for elevateKubeSystemPrivileges
	I0919 18:39:39.645571   15785 kubeadm.go:394] duration metric: took 13.076716929s to StartCluster
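The burst of "kubectl get sa default" calls above is minikube's readiness gate for elevateKubeSystemPrivileges: it polls roughly every 500ms until the default ServiceAccount exists, a proxy for the API server and namespace controller being fully up, then records the duration. A minimal sketch of the same loop, assuming the in-node binary and kubeconfig paths shown in this run:

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done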
	I0919 18:39:39.645590   15785 settings.go:142] acquiring lock: {Name:mk64b5a5d79680fb0b250d268808142029c49502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:39.645687   15785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:39:39.646010   15785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7708/kubeconfig: {Name:mk4b292ae80d4376ae5eb287b2c4e3e0d9b1ffde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:39.646175   15785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:39:39.646184   15785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:39:39.646254   15785 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:39:39.646386   15785 addons.go:69] Setting yakd=true in profile "addons-807343"
	I0919 18:39:39.646401   15785 addons.go:69] Setting gcp-auth=true in profile "addons-807343"
	I0919 18:39:39.646410   15785 addons.go:234] Setting addon yakd=true in "addons-807343"
	I0919 18:39:39.646419   15785 addons.go:69] Setting inspektor-gadget=true in profile "addons-807343"
	I0919 18:39:39.646430   15785 mustload.go:65] Loading cluster: addons-807343
	I0919 18:39:39.646437   15785 addons.go:234] Setting addon inspektor-gadget=true in "addons-807343"
	I0919 18:39:39.646437   15785 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-807343"
	I0919 18:39:39.646437   15785 addons.go:69] Setting cloud-spanner=true in profile "addons-807343"
	I0919 18:39:39.646455   15785 addons.go:69] Setting volcano=true in profile "addons-807343"
	I0919 18:39:39.646465   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646412   15785 addons.go:69] Setting ingress-dns=true in profile "addons-807343"
	I0919 18:39:39.646477   15785 addons.go:69] Setting storage-provisioner=true in profile "addons-807343"
	I0919 18:39:39.646479   15785 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-807343"
	I0919 18:39:39.646492   15785 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-807343"
	I0919 18:39:39.646504   15785 addons.go:69] Setting helm-tiller=true in profile "addons-807343"
	I0919 18:39:39.646517   15785 addons.go:234] Setting addon helm-tiller=true in "addons-807343"
	I0919 18:39:39.646537   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646401   15785 addons.go:69] Setting registry=true in profile "addons-807343"
	I0919 18:39:39.646576   15785 addons.go:234] Setting addon registry=true in "addons-807343"
	I0919 18:39:39.646602   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646602   15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:39:39.646465   15785 addons.go:69] Setting metrics-server=true in profile "addons-807343"
	I0919 18:39:39.646696   15785 addons.go:234] Setting addon metrics-server=true in "addons-807343"
	I0919 18:39:39.646718   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646539   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646415   15785 addons.go:69] Setting default-storageclass=true in profile "addons-807343"
	I0919 18:39:39.646846   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646493   15785 addons.go:234] Setting addon storage-provisioner=true in "addons-807343"
	I0919 18:39:39.646945   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646992   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.647021   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.647088   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.647163   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.647229   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646844   15785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-807343"
	I0919 18:39:39.647438   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.647669   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646469   15785 addons.go:234] Setting addon volcano=true in "addons-807343"
	I0919 18:39:39.648086   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.648663   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646388   15785 addons.go:69] Setting ingress=true in profile "addons-807343"
	I0919 18:39:39.648898   15785 addons.go:234] Setting addon ingress=true in "addons-807343"
	I0919 18:39:39.646468   15785 addons.go:234] Setting addon cloud-spanner=true in "addons-807343"
	I0919 18:39:39.649005   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.646445   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.649579   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646495   15785 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-807343"
	I0919 18:39:39.649801   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.649908   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646388   15785 config.go:182] Loaded profile config "addons-807343": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:39:39.646493   15785 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-807343"
	I0919 18:39:39.649997   15785 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-807343"
	I0919 18:39:39.646454   15785 addons.go:69] Setting volumesnapshots=true in profile "addons-807343"
	I0919 18:39:39.650137   15785 addons.go:234] Setting addon volumesnapshots=true in "addons-807343"
	I0919 18:39:39.650163   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.648960   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.650275   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.646483   15785 addons.go:234] Setting addon ingress-dns=true in "addons-807343"
	I0919 18:39:39.650558   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.648969   15785 out.go:177] * Verifying Kubernetes components...
	I0919 18:39:39.651972   15785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:39.672671   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.672787   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.672671   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.673219   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.685075   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.686451   15785 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:39:39.687978   15785 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:39:39.687999   15785 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:39:39.688172   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.690225   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:39:39.691209   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:39:39.692320   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:39:39.693277   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:39:39.695524   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:39:39.698952   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:39:39.699973   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:39:39.701052   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:39:39.701966   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:39:39.701985   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:39:39.702037   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.716700   15785 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:39:39.717888   15785 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:39:39.718925   15785 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:39:39.718945   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:39:39.718996   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.719101   15785 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:39:39.720139   15785 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:39:39.720157   15785 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:39:39.720193   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.726091   15785 addons.go:234] Setting addon default-storageclass=true in "addons-807343"
	I0919 18:39:39.726127   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.726546   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.727468   15785 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0919 18:39:39.728838   15785 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0919 18:39:39.730129   15785 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0919 18:39:39.732571   15785 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:39:39.732612   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0919 18:39:39.732677   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.734302   15785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:39:39.734369   15785 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:39:39.735521   15785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:39.735541   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:39:39.735586   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.735849   15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:39:39.735863   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:39:39.735907   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.742065   15785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:39.743716   15785 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:39:39.743957   15785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:39.744757   15785 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:39.744774   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:39:39.744823   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.754042   15785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:39:39.755361   15785 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:39.755380   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:39:39.755429   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.757120   15785 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:39:39.758249   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:39:39.758264   15785 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:39:39.758310   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.762910   15785 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:39:39.762934   15785 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:39:39.764070   15785 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:39.764088   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:39:39.764136   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.764324   15785 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:39:39.764337   15785 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:39:39.764386   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.765848   15785 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:39:39.767312   15785 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:39.767328   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:39:39.767376   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.772660   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.774038   15785 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-807343"
	I0919 18:39:39.774083   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:39.774554   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:39.791138   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.792630   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.793037   15785 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:39.793058   15785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:39:39.793109   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.794316   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.795317   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.795734   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.812437   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.821141   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.821937   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.824156   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.827809   15785 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:39:39.830379   15785 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:39:39.831473   15785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:39.831494   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:39:39.831543   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:39.834474   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.835857   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.837358   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.838132   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.849755   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:39.871087   15785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
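The pipeline above rewrites CoreDNS's Corefile in place: it inserts a hosts block mapping 192.168.49.1 to host.minikube.internal (with fallthrough so all other names still resolve) ahead of the forward directive, enables the log plugin ahead of errors, and replaces the ConfigMap. To confirm the rewrite landed, the same binary can dump the live Corefile (sketch using the paths from this run):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'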
	I0919 18:39:39.871195   15785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:40.186871   15785 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:39:40.186961   15785 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:39:40.368885   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:40.370061   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:40.379859   15785 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:39:40.379928   15785 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:39:40.385951   15785 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:40.386023   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:39:40.392100   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:39:40.392123   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:39:40.482380   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:40.486087   15785 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:39:40.486111   15785 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:39:40.568800   15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:39:40.568886   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:39:40.570238   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:40.570726   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:39:40.576395   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:40.669560   15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:39:40.669589   15785 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:39:40.670189   15785 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:39:40.670214   15785 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:39:40.673207   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:40.673481   15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:39:40.673538   15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:39:40.685737   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:40.687656   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:40.779723   15785 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:39:40.779814   15785 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:39:40.782608   15785 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:39:40.782692   15785 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:39:40.785974   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:39:40.786039   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:39:40.874803   15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:39:40.874887   15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:39:40.969672   15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:39:40.969709   15785 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:39:40.981368   15785 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:40.981461   15785 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:39:40.985252   15785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:39:40.985323   15785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:39:41.187670   15785 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:39:41.187768   15785 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:39:41.267896   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:39:41.267931   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:39:41.280914   15785 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:39:41.280940   15785 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:39:41.483272   15785 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:41.483315   15785 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:39:41.571400   15785 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:39:41.571429   15785 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:39:41.688891   15785 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:41.688956   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:39:41.770515   15785 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:39:41.770592   15785 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:39:41.870144   15785 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.998906916s)
	I0919 18:39:41.871107   15785 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.999989161s)
	I0919 18:39:41.871262   15785 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 18:39:41.872202   15785 node_ready.go:35] waiting up to 6m0s for node "addons-807343" to be "Ready" ...
	I0919 18:39:41.873755   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:41.878692   15785 node_ready.go:49] node "addons-807343" has status "Ready":"True"
	I0919 18:39:41.878750   15785 node_ready.go:38] duration metric: took 6.488114ms for node "addons-807343" to be "Ready" ...
	I0919 18:39:41.878841   15785 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:39:41.889700   15785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace to be "Ready" ...
	I0919 18:39:41.972397   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:39:41.972479   15785 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:39:42.069920   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:42.075240   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:39:42.075317   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:39:42.173829   15785 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:42.173916   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:39:42.180217   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:42.376149   15785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-807343" context rescaled to 1 replicas
	I0919 18:39:42.390234   15785 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:42.390257   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:39:42.569959   15785 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:39:42.569990   15785 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:39:42.682561   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:43.188993   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:39:43.189022   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:39:43.291834   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:43.770772   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:39:43.770875   15785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:39:43.980025   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:44.374980   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:39:44.375006   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:39:44.580609   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.211613055s)
	I0919 18:39:45.068915   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.698766039s)
	I0919 18:39:45.069261   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.586844604s)
	I0919 18:39:45.169863   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:39:45.169952   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:39:45.879874   15785 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:45.879919   15785 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:39:45.988792   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:46.070160   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:46.775488   15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:39:46.775662   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:46.799484   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:47.578618   15785 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:39:47.877471   15785 addons.go:234] Setting addon gcp-auth=true in "addons-807343"
	I0919 18:39:47.877534   15785 host.go:66] Checking if "addons-807343" exists ...
	I0919 18:39:47.878040   15785 cli_runner.go:164] Run: docker container inspect addons-807343 --format={{.State.Status}}
	I0919 18:39:47.901007   15785 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:39:47.901048   15785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-807343
	I0919 18:39:47.917809   15785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/addons-807343/id_rsa Username:docker}
	I0919 18:39:48.472584   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:50.971717   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:51.877162   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.306371145s)
	I0919 18:39:51.877251   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.306949991s)
	I0919 18:39:51.877292   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.300857244s)
	I0919 18:39:51.877639   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.204365694s)
	I0919 18:39:51.877658   15785 addons.go:475] Verifying addon ingress=true in "addons-807343"
	I0919 18:39:51.877829   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.192014171s)
	I0919 18:39:51.877847   15785 addons.go:475] Verifying addon registry=true in "addons-807343"
	I0919 18:39:51.878290   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.004474997s)
	I0919 18:39:51.878446   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.808483441s)
	I0919 18:39:51.878636   15785 addons.go:475] Verifying addon metrics-server=true in "addons-807343"
	I0919 18:39:51.878506   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.698206742s)
	I0919 18:39:51.878602   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.195990945s)
	I0919 18:39:51.878734   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.190302446s)
	I0919 18:39:51.878697   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.586832565s)
	W0919 18:39:51.878768   15785 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:51.878790   15785 retry.go:31] will retry after 169.31069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
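
Note: the failure above is the usual CRD establishment race. The VolumeSnapshotClass object and the CRDs that define it are applied in one kubectl batch, so the first apply fails with "no matches for kind" before the API server has registered the new types; minikube retries, and the --force re-apply at 18:39:52 below succeeds once the CRDs are established. A minimal Go sketch of the race-free ordering, assuming kubectl is on PATH (run is an illustrative helper, not a minikube function):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl and surfaces its combined output on failure.
	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		// 1. Apply the CRD manifests first.
		crds := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		}
		for _, f := range crds {
			if err := run("apply", "-f", f); err != nil {
				panic(err)
			}
		}
		// 2. Block until the API server reports the CRD as established,
		//    i.e. until "kind VolumeSnapshotClass" can actually be mapped.
		if err := run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// 3. Only now apply the objects that depend on the CRD.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}
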
	I0919 18:39:51.879703   15785 out.go:177] * Verifying registry addon...
	I0919 18:39:51.879704   15785 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-807343 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:39:51.879880   15785 out.go:177] * Verifying ingress addon...
	I0919 18:39:51.885615   15785 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:39:51.885616   15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:39:51.889715   15785 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:39:51.889736   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
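
Note: the interleaved kapi.go:96 lines that dominate the rest of this log are one poll loop per addon label selector, each re-listing pods until every match leaves Pending. A minimal client-go sketch of that loop shape, assuming an initialized *kubernetes.Clientset (waitForLabel is an illustrative name, not the minikube function):

	package addonwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel re-lists pods matching selector in ns until all are Running.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or no pods yet: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // the repeated "current state: Pending" case below
					}
				}
				return true, nil
			})
	}
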
	I0919 18:39:52.048535   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:52.068686   15785 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:39:52.068755   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.391441   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.391634   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.890170   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.891216   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.190665   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.120439643s)
	I0919 18:39:53.190701   15785 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.28966662s)
	I0919 18:39:53.190706   15785 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-807343"
	I0919 18:39:53.192279   15785 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:39:53.192389   15785 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:53.194636   15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:39:53.196076   15785 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:39:53.197177   15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:39:53.197198   15785 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:39:53.199456   15785 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:39:53.199471   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.290404   15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:39:53.290428   15785 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:39:53.379776   15785 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:53.379809   15785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:39:53.469691   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:53.470717   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.472271   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.491578   15785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:53.769699   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.891606   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.891668   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.199301   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.390659   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.391264   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.580558   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.531977835s)
	I0919 18:39:54.699628   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.889533   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.890682   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.968955   15785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.477330303s)
	I0919 18:39:54.970347   15785 addons.go:475] Verifying addon gcp-auth=true in "addons-807343"
	I0919 18:39:54.971906   15785 out.go:177] * Verifying gcp-auth addon...
	I0919 18:39:54.973873   15785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:39:54.989005   15785 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:39:55.199575   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.390092   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.390546   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.699567   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.890692   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.891284   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.894135   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:56.199324   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.389725   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.389992   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.700164   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.889558   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.889813   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.199204   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.389401   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.389801   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.699512   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.889545   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.890534   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.894760   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:39:58.273726   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.389680   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.389839   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.699193   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.889408   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.889653   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.199178   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.389235   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.390135   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.699641   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.889652   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.890086   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.199600   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.389857   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.390211   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.394214   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:00.698928   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.889710   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.890101   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.200176   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.389298   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.389606   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.699016   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.889132   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.889473   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.199184   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.389375   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.389771   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.394514   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:02.699125   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.890049   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.890525   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.199626   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.388747   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.388967   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.699525   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.889759   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.890315   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.199815   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.389180   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.389658   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.699330   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.889963   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.890286   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.895112   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:05.199392   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.389968   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.390368   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.699223   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.890393   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.890757   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.200106   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.391139   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.391845   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.698525   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.890016   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.890230   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.198979   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.389511   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.389852   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.393612   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:07.699318   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.889173   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.890152   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.199616   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.389123   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.389534   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.699493   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.888933   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.889396   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.198817   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.391519   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.391963   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.395307   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:09.699691   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.888775   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.889194   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.199293   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.389647   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.390049   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.698674   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.889437   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.889647   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.199499   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.389594   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.390432   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.699325   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.889523   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.889788   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.894236   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:12.199321   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.389440   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.389720   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.699209   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.890270   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.890421   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.200175   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.390275   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.390863   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.699657   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.890252   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.890543   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.894747   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:14.199118   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.389959   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.390430   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.699756   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.889721   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.889835   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.198522   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.388932   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.389359   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.699594   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.889793   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.890307   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.199345   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.389850   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.390361   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.393417   15785 pod_ready.go:103] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:16.699438   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.889814   15785 kapi.go:107] duration metric: took 25.004195756s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:40:16.890295   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.199183   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.390022   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.395295   15785 pod_ready.go:93] pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.395368   15785 pod_ready.go:82] duration metric: took 35.505606197s for pod "coredns-7c65d6cfc9-cfl84" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.395386   15785 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.396928   15785 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j7z28" not found
	I0919 18:40:17.396946   15785 pod_ready.go:82] duration metric: took 1.554154ms for pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace to be "Ready" ...
	E0919 18:40:17.396955   15785 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-j7z28" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-j7z28" not found
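
Note: the "skipping!" pair above is deliberate. The second coredns replica (coredns-7c65d6cfc9-j7z28) appears to have been scaled away during setup, so its name no longer resolves and the readiness wait must treat NotFound as done rather than as a failure. A minimal client-go sketch of that skip path (podGone is an illustrative name):

	package podready

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podGone reports whether the pod no longer exists, mirroring the
	// "skipping!" branch above: NotFound means scaled away, not failed.
	func podGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err // nil for an existing pod; real errors propagate
	}
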
	I0919 18:40:17.396961   15785 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.400896   15785 pod_ready.go:93] pod "etcd-addons-807343" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.400914   15785 pod_ready.go:82] duration metric: took 3.945864ms for pod "etcd-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.400924   15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.404814   15785 pod_ready.go:93] pod "kube-apiserver-addons-807343" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.404835   15785 pod_ready.go:82] duration metric: took 3.902185ms for pod "kube-apiserver-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.404846   15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.408587   15785 pod_ready.go:93] pod "kube-controller-manager-addons-807343" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.408603   15785 pod_ready.go:82] duration metric: took 3.750531ms for pod "kube-controller-manager-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.408612   15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ddktm" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.593370   15785 pod_ready.go:93] pod "kube-proxy-ddktm" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.593392   15785 pod_ready.go:82] duration metric: took 184.772891ms for pod "kube-proxy-ddktm" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.593403   15785 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.699138   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.978072   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.992993   15785 pod_ready.go:93] pod "kube-scheduler-addons-807343" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:17.993017   15785 pod_ready.go:82] duration metric: took 399.606715ms for pod "kube-scheduler-addons-807343" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:17.993031   15785 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:18.199633   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.389659   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.392965   15785 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:18.392988   15785 pod_ready.go:82] duration metric: took 399.948916ms for pod "nvidia-device-plugin-daemonset-4rj76" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:18.392998   15785 pod_ready.go:39] duration metric: took 36.514120678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:18.393023   15785 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:40:18.393084   15785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:18.410793   15785 api_server.go:72] duration metric: took 38.764584516s to wait for apiserver process to appear ...
	I0919 18:40:18.410817   15785 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:40:18.410839   15785 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:40:18.415893   15785 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:40:18.416856   15785 api_server.go:141] control plane version: v1.31.1
	I0919 18:40:18.416881   15785 api_server.go:131] duration metric: took 6.056323ms to wait for apiserver health ...
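
Note: the healthz gate above is a plain HTTPS GET against the apiserver; anything other than a 200 with body "ok" keeps the loop waiting. A minimal sketch of the probe, assuming the cluster CA bundle is not at hand so verification is skipped here (minikube itself uses the real certs):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: no CA bundle loaded, so skip verification for the probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: "200 ok"
	}
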
	I0919 18:40:18.416890   15785 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:40:18.599874   15785 system_pods.go:59] 18 kube-system pods found
	I0919 18:40:18.599909   15785 system_pods.go:61] "coredns-7c65d6cfc9-cfl84" [eee626ef-868e-4ead-b5e6-9517454e5ff9] Running
	I0919 18:40:18.599921   15785 system_pods.go:61] "csi-hostpath-attacher-0" [7b2441ca-4042-46e2-807a-db381962ac05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:18.599931   15785 system_pods.go:61] "csi-hostpath-resizer-0" [b8dcc11c-f567-48e9-ab17-75e5b0475393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:18.599942   15785 system_pods.go:61] "csi-hostpathplugin-pzn4j" [3e4889e0-e027-4eca-a4da-302b8811e298] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:18.599949   15785 system_pods.go:61] "etcd-addons-807343" [408b91f6-57e0-4654-88c5-a8d6b550bac6] Running
	I0919 18:40:18.599956   15785 system_pods.go:61] "kube-apiserver-addons-807343" [d8bc5f24-a83f-493a-8d39-601bb010a9e5] Running
	I0919 18:40:18.599961   15785 system_pods.go:61] "kube-controller-manager-addons-807343" [3966284d-f6c5-45a2-8544-7e632f3ab601] Running
	I0919 18:40:18.600003   15785 system_pods.go:61] "kube-ingress-dns-minikube" [3d06f080-c6aa-4078-971c-fd8426586f6e] Running
	I0919 18:40:18.600012   15785 system_pods.go:61] "kube-proxy-ddktm" [f6ad1770-b609-4aff-8863-8912236980a1] Running
	I0919 18:40:18.600020   15785 system_pods.go:61] "kube-scheduler-addons-807343" [1bf6d1d5-3895-4bc0-a679-1c913857701c] Running
	I0919 18:40:18.600025   15785 system_pods.go:61] "metrics-server-84c5f94fbc-d74dx" [d90ed638-b34d-4a70-a846-898f37d3a262] Running
	I0919 18:40:18.600033   15785 system_pods.go:61] "nvidia-device-plugin-daemonset-4rj76" [0c3f2ba6-3e70-4d40-844b-605e747b7435] Running
	I0919 18:40:18.600042   15785 system_pods.go:61] "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
	I0919 18:40:18.600053   15785 system_pods.go:61] "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
	I0919 18:40:18.600066   15785 system_pods.go:61] "snapshot-controller-56fcc65765-6ptq4" [b4bdc5cf-660e-4290-820f-ebce887001c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:18.600082   15785 system_pods.go:61] "snapshot-controller-56fcc65765-b7vgm" [ff423e87-1fec-4d13-8aef-42c22620df00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:18.600091   15785 system_pods.go:61] "storage-provisioner" [b72035c0-b232-4cda-9f88-42bf47f8ddc3] Running
	I0919 18:40:18.600101   15785 system_pods.go:61] "tiller-deploy-b48cc5f79-vmsvx" [3388a43f-3bd2-4f3a-8975-ecd10db08a16] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 18:40:18.600108   15785 system_pods.go:74] duration metric: took 183.213516ms to wait for pod list to return data ...
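
Note: the "Pending / Ready:ContainersNotReady (containers with unready status: [...])" annotations in the list above are rendered from the pod phase plus each condition that is not True. A minimal sketch of that rendering, as an illustration of the format rather than minikube's exact code:

	package podstatus

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// describe renders "Phase" for healthy pods and appends
	// " / Type:Reason (Message)" for every condition that is not True.
	func describe(p corev1.Pod) string {
		s := string(p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Status != corev1.ConditionTrue {
				s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
			}
		}
		return s
	}
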
	I0919 18:40:18.600116   15785 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:40:18.699903   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.793571   15785 default_sa.go:45] found service account: "default"
	I0919 18:40:18.793601   15785 default_sa.go:55] duration metric: took 193.477641ms for default service account to be created ...
	I0919 18:40:18.793613   15785 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:40:18.890364   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.083135   15785 system_pods.go:86] 18 kube-system pods found
	I0919 18:40:19.083163   15785 system_pods.go:89] "coredns-7c65d6cfc9-cfl84" [eee626ef-868e-4ead-b5e6-9517454e5ff9] Running
	I0919 18:40:19.083172   15785 system_pods.go:89] "csi-hostpath-attacher-0" [7b2441ca-4042-46e2-807a-db381962ac05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:19.083178   15785 system_pods.go:89] "csi-hostpath-resizer-0" [b8dcc11c-f567-48e9-ab17-75e5b0475393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:19.083191   15785 system_pods.go:89] "csi-hostpathplugin-pzn4j" [3e4889e0-e027-4eca-a4da-302b8811e298] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:19.083198   15785 system_pods.go:89] "etcd-addons-807343" [408b91f6-57e0-4654-88c5-a8d6b550bac6] Running
	I0919 18:40:19.083204   15785 system_pods.go:89] "kube-apiserver-addons-807343" [d8bc5f24-a83f-493a-8d39-601bb010a9e5] Running
	I0919 18:40:19.083212   15785 system_pods.go:89] "kube-controller-manager-addons-807343" [3966284d-f6c5-45a2-8544-7e632f3ab601] Running
	I0919 18:40:19.083221   15785 system_pods.go:89] "kube-ingress-dns-minikube" [3d06f080-c6aa-4078-971c-fd8426586f6e] Running
	I0919 18:40:19.083229   15785 system_pods.go:89] "kube-proxy-ddktm" [f6ad1770-b609-4aff-8863-8912236980a1] Running
	I0919 18:40:19.083234   15785 system_pods.go:89] "kube-scheduler-addons-807343" [1bf6d1d5-3895-4bc0-a679-1c913857701c] Running
	I0919 18:40:19.083240   15785 system_pods.go:89] "metrics-server-84c5f94fbc-d74dx" [d90ed638-b34d-4a70-a846-898f37d3a262] Running
	I0919 18:40:19.083245   15785 system_pods.go:89] "nvidia-device-plugin-daemonset-4rj76" [0c3f2ba6-3e70-4d40-844b-605e747b7435] Running
	I0919 18:40:19.083251   15785 system_pods.go:89] "registry-66c9cd494c-bxkct" [5daab8c5-d486-4f2e-a165-b7129bb49ef1] Running
	I0919 18:40:19.083254   15785 system_pods.go:89] "registry-proxy-bbpkk" [073b4ea3-119e-40f8-9331-51fd7dfdf5bf] Running
	I0919 18:40:19.083264   15785 system_pods.go:89] "snapshot-controller-56fcc65765-6ptq4" [b4bdc5cf-660e-4290-820f-ebce887001c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:19.083273   15785 system_pods.go:89] "snapshot-controller-56fcc65765-b7vgm" [ff423e87-1fec-4d13-8aef-42c22620df00] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:19.083280   15785 system_pods.go:89] "storage-provisioner" [b72035c0-b232-4cda-9f88-42bf47f8ddc3] Running
	I0919 18:40:19.083286   15785 system_pods.go:89] "tiller-deploy-b48cc5f79-vmsvx" [3388a43f-3bd2-4f3a-8975-ecd10db08a16] Running
	I0919 18:40:19.083297   15785 system_pods.go:126] duration metric: took 289.67668ms to wait for k8s-apps to be running ...
	I0919 18:40:19.083310   15785 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:40:19.083363   15785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:40:19.095871   15785 system_svc.go:56] duration metric: took 12.554192ms WaitForService to wait for kubelet
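
Note: the kubelet gate is a single systemd query run over SSH; is-active --quiet prints nothing, so the exit code alone carries the state. A minimal sketch, assuming local execution rather than minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A nil error from Run() means the unit exited 0, i.e. kubelet is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
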
	I0919 18:40:19.095894   15785 kubeadm.go:582] duration metric: took 39.449692605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:19.095909   15785 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:40:19.193533   15785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:40:19.193572   15785 node_conditions.go:123] node cpu capacity is 8
	I0919 18:40:19.193584   15785 node_conditions.go:105] duration metric: took 97.669974ms to run NodePressure ...
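
Note: the NodePressure figures above (304681132Ki ephemeral storage, 8 CPUs) come straight off each node object's capacity map. A minimal client-go sketch of that read, assuming an initialized clientset (printCapacity is an illustrative name):

	package nodecap

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printCapacity reads the per-node capacity map the figures above come from.
	func printCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]               // e.g. 8
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]  // e.g. 304681132Ki
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}
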
	I0919 18:40:19.193605   15785 start.go:241] waiting for startup goroutines ...
	I0919 18:40:19.199329   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.389586   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.699865   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.889664   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.199325   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.389948   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.698535   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.888906   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.200209   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.389965   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.699168   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.889838   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.200065   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.389527   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.699501   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.893065   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.199343   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.390191   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.699260   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.890461   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.199477   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.389136   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.699221   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.889820   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.200138   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.389292   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.698417   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.890518   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.200657   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.389809   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.698799   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.889818   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.199794   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.389534   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.699533   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.900422   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.199254   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.390107   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.700422   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.889998   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.199662   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.390131   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.699239   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.890070   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.208176   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.390438   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.699509   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.890745   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.199683   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.390240   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.698927   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.889552   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.200440   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.389357   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.699606   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.890596   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.198837   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.389932   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.698946   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.889654   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.199377   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.390365   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.700286   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.890695   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.199638   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.389579   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.699572   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.913625   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.199281   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.389703   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.699318   15785 kapi.go:107] duration metric: took 43.504683414s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:40:36.889764   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.389801   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.889236   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.388685   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.889545   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.389995   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.889486   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.389215   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.889229   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.389634   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.889403   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.389505   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.889748   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.389834   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical kapi.go:96 "waiting for pod" poll entries, repeated every ~500ms, elided ...]
	I0919 18:40:59.889562   15785 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.389962   15785 kapi.go:107] duration metric: took 1m8.504343143s to wait for app.kubernetes.io/name=ingress-nginx ...
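The block above is kapi.go's poll loop: list pods by label selector on a fixed tick until every match reports Running, then log the total duration. A minimal sketch of that pattern against client-go follows; the package and function names, namespace handling, and logging are illustrative, not minikube's actual code.

	package kapiwait // illustrative package name

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector in ns until all report
	// Running or ctx expires, mirroring the ~500ms cadence in the log.
	func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}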
	I0919 18:41:17.977430   15785 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:41:17.977450   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 "waiting for pod" poll entries, repeated every ~500ms, elided ...]
	I0919 18:42:24.976693   15785 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.478145   15785 kapi.go:107] duration metric: took 2m30.504270195s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:42:25.479629   15785 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-807343 cluster.
	I0919 18:42:25.480807   15785 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:42:25.482012   15785 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
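The `gcp-auth-skip-secret` opt-out mentioned above is just a pod label that the gcp-auth webhook checks before injecting credentials. Reusing the imports from the previous sketch, a hedged example of creating a pod that opts out; the pod name, image, and the label value "true" are assumptions, since the message only fixes the label key.

	// createSkippedPod creates a pod the gcp-auth webhook should leave
	// untouched. The clientset and context are assumed from the caller.
	func createSkippedPod(ctx context.Context, c kubernetes.Interface) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-creds", // illustrative name
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // key from the message above; value assumed
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox", // illustrative image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		_, err := c.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
		return err
	}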
	I0919 18:42:25.483313   15785 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, nvidia-device-plugin, helm-tiller, metrics-server, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0919 18:42:25.484387   15785 addons.go:510] duration metric: took 2m45.83813789s for enable addons: enabled=[storage-provisioner ingress-dns storage-provisioner-rancher volcano nvidia-device-plugin helm-tiller metrics-server inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0919 18:42:25.484420   15785 start.go:246] waiting for cluster config update ...
	I0919 18:42:25.484439   15785 start.go:255] writing updated cluster config ...
	I0919 18:42:25.484670   15785 ssh_runner.go:195] Run: rm -f paused
	I0919 18:42:25.530236   15785 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:42:25.532248   15785 out.go:177] * Done! kubectl is now configured to use "addons-807343" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 19 18:51:49 addons-807343 dockerd[1337]: time="2024-09-19T18:51:49.894152971Z" level=warning msg="failed to close stdin: NotFound: task 92d5192af51c5ab20eda8cee5705b369ffea8983b981b69536c34201e655ec22 not found: not found"
	Sep 19 18:51:51 addons-807343 dockerd[1337]: time="2024-09-19T18:51:51.693232557Z" level=info msg="ignoring event" container=7fc8608673ad2821f5bc4bb8de4fcdccfda3907b3d13ce6f1efbe2b1732b0c93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:52 addons-807343 dockerd[1337]: time="2024-09-19T18:51:52.168199715Z" level=info msg="ignoring event" container=ce976de5bdf099212de44cd6465570c744e498e2e20fa8cb2ec263592333622c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:52 addons-807343 dockerd[1337]: time="2024-09-19T18:51:52.292990179Z" level=info msg="ignoring event" container=84719236e80e39e6e55593e3a319bb30d70b78156a98706802f5226ebc645465 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.760723631Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.762509046Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:51:54 addons-807343 dockerd[1337]: time="2024-09-19T18:51:54.937988436Z" level=info msg="ignoring event" container=7efccd0e37b6586224f49f3c317e5beb17f2ce43acb21314f952e7dd368993f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:55 addons-807343 dockerd[1337]: time="2024-09-19T18:51:55.071986891Z" level=info msg="ignoring event" container=f53fc94ea5ace8b46d80aea7fdfbc1aebe19650fdd8951c76b7dc16b3e7e9936 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:51:55 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:51:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d1cbeb90e7864210126b5d957ebd18b0af6352c34e9be2e66c7f12c058b0998f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 18:51:57 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:51:57Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 19 18:51:58 addons-807343 dockerd[1337]: time="2024-09-19T18:51:58.612439525Z" level=info msg="ignoring event" container=4d4f25b9e340db719745c99c8df1d7e1958b6d5c646096a051f1bc16cd0e5d61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:04 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f9c10c881707e2a35f8c287dff2500f782268d81a760b1552f3741a61a52196/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 18:52:04 addons-807343 dockerd[1337]: time="2024-09-19T18:52:04.517546509Z" level=info msg="ignoring event" container=3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:04 addons-807343 dockerd[1337]: time="2024-09-19T18:52:04.561411153Z" level=info msg="ignoring event" container=cf40b486b65ecd03cb586caa798d24b72c59b2c1a7a5c4618fb38685e8f8c48f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:04 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:04Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 19 18:52:05 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:05Z" level=error msg="error getting RW layer size for container ID '3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf': Error response from daemon: No such container: 3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf"
	Sep 19 18:52:05 addons-807343 cri-dockerd[1601]: time="2024-09-19T18:52:05Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3d7e1ac6f85964c94afe9b6a85bca15e1ea4600b57e9926ebeaeb9c4c3329bcf'"
	Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.491929987Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8
	Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.552400017Z" level=info msg="ignoring event" container=10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:08 addons-807343 dockerd[1337]: time="2024-09-19T18:52:08.696776290Z" level=info msg="ignoring event" container=90f138a209b7e2928406d19e9b74cc6193ce03539da373a38e804468226c0636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:14 addons-807343 dockerd[1337]: time="2024-09-19T18:52:14.800501317Z" level=info msg="ignoring event" container=1bcfbde63b203efc42d0134d6ffb9063f6fd592c7103bc01811f5f2c9c642d40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.276584307Z" level=info msg="ignoring event" container=a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.326458758Z" level=info msg="ignoring event" container=d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.416725796Z" level=info msg="ignoring event" container=30843bcbcdcad55806bc12716a5604065f2f013afbfaf79e9b37d926fabbc30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:52:15 addons-807343 dockerd[1337]: time="2024-09-19T18:52:15.492027130Z" level=info msg="ignoring event" container=7c03493c3b3d7c4ad8fb9797a6914e376d92aa6894b41b72baf207d066838ceb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
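The "failed to exit within 2s of signal 15 - using the force" entry at 18:52:08 is dockerd's standard stop escalation: send SIGTERM, wait out a grace period, then SIGKILL. A self-contained Go sketch of the same pattern; the 2s grace period mirrors the log, and the child command is a stand-in for the container process.

	package main

	import (
		"log"
		"os/exec"
		"syscall"
		"time"
	)

	func main() {
		cmd := exec.Command("sleep", "60") // stand-in for the container process
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()

		_ = cmd.Process.Signal(syscall.SIGTERM) // signal 15, as in the log
		select {
		case err := <-done:
			log.Printf("exited gracefully: %v", err)
		case <-time.After(2 * time.Second): // the grace period dockerd reports
			log.Print("failed to exit within 2s of signal 15 - using the force")
			_ = cmd.Process.Kill() // SIGKILL
			<-done
		}
	}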
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	374e9c462e5db       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  12 seconds ago      Running             hello-world-app           0                   8f9c10c881707       hello-world-app-55bf9c44b4-sxzx2
	217553a6fdbd1       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                19 seconds ago      Running             nginx                     0                   d1cbeb90e7864       nginx
	92d5192af51c5       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                          27 seconds ago      Exited              helm-test                 0                   7fc8608673ad2       helm-test
	17e78c6f0cb4a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   568ae086eaca3       gcp-auth-89d5ffd79-qfxtn
	5c65d44069279       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   abc41a6e2fd5e       ingress-nginx-admission-patch-rbdzf
	0237edcc60e02       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   5cf9bd5ed4bb7       ingress-nginx-admission-create-zdffs
	d05dc574a112f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   7c03493c3b3d7       registry-proxy-bbpkk
	a12a2a40d4e0d       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   30843bcbcdcad       registry-66c9cd494c-bxkct
	67149d9e3be24       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   de8102a6441b7       storage-provisioner
	88194834292fc       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   ad31f0c01049c       coredns-7c65d6cfc9-cfl84
	969a0f35b949e       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   092bdac6bbb50       kube-proxy-ddktm
	32c83be9d6183       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   aeb138a7ea6d1       kube-controller-manager-addons-807343
	d399ae9b2f7d8       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   f3e9e572b23f2       kube-apiserver-addons-807343
	75e79c347dfc4       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   d92fba3fe8cca       kube-scheduler-addons-807343
	d6ed3b2e997db       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   bbb9f42c62d82       etcd-addons-807343
	
	
	==> coredns [88194834292f] <==
	Trace[918896613]: [30.000996595s] [30.000996595s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1655127070]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 18:39:43.280) (total time: 30001ms):
	Trace[1655127070]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:40:13.281)
	Trace[1655127070]: [30.001220278s] [30.001220278s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46539 - 57371 "HINFO IN 2989181266431568175.3170300541988887209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015860839s
	[INFO] 10.244.0.26:51184 - 33699 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000349323s
	[INFO] 10.244.0.26:60648 - 23627 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000437967s
	[INFO] 10.244.0.26:50186 - 36409 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171923s
	[INFO] 10.244.0.26:49524 - 47586 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000218252s
	[INFO] 10.244.0.26:46207 - 47367 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104984s
	[INFO] 10.244.0.26:44094 - 61553 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163497s
	[INFO] 10.244.0.26:58751 - 11412 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007490996s
	[INFO] 10.244.0.26:32809 - 60728 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007976071s
	[INFO] 10.244.0.26:44893 - 25751 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007074952s
	[INFO] 10.244.0.26:42913 - 26308 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00726801s
	[INFO] 10.244.0.26:36300 - 60089 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005524268s
	[INFO] 10.244.0.26:38304 - 13061 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006228787s
	[INFO] 10.244.0.26:42889 - 5566 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000621578s
	[INFO] 10.244.0.26:54898 - 50128 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000777141s
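The run of NXDOMAIN answers above is the resolv.conf search path at work: with `options ndots:5`, a name like storage.googleapis.com (two dots) is first tried under every search suffix, and only then as an absolute name, which finally returns NOERROR. The suffix order below is read off the queries from 10.244.0.26 above (a pod whose search list starts with gcp-auth.svc.cluster.local); the sketch only prints the candidate ordering and makes no resolver calls.

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates reproduces glibc-style search ordering: a name with
	// fewer dots than ndots is tried under each search suffix before
	// being tried as an absolute name.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name+".") // absolute form last
	}

	func main() {
		search := []string{ // inferred from the coredns queries above
			"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local",
			"us-east4-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
		}
		for _, q := range candidates("storage.googleapis.com", search, 5) {
			fmt.Println(q) // each candidate appears as a query in the coredns log
		}
	}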
	
	
	==> describe nodes <==
	Name:               addons-807343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-807343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-807343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_39_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-807343
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-807343
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:52:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:52:10 +0000   Thu, 19 Sep 2024 18:39:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:52:10 +0000   Thu, 19 Sep 2024 18:39:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:52:10 +0000   Thu, 19 Sep 2024 18:39:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:52:10 +0000   Thu, 19 Sep 2024 18:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-807343
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 22c49f1956b94547a3f39e5d27ac1425
	  System UUID:                4ffd36f8-513f-4a96-96f2-486a850e4563
	  Boot ID:                    2196c4a9-2227-4889-b22e-1ff833eab33f
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-sxzx2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  gcp-auth                    gcp-auth-89d5ffd79-qfxtn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-cfl84                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-807343                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-807343             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-807343    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ddktm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-807343             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
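As a sanity check on this summary against the per-pod table above: the CPU requests sum to 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m, and 750m of the node's 8-CPU (8000m) allocatable is 9.375%, which the table rounds to 9%. The 170Mi memory request is likewise coredns's 70Mi plus etcd's 100Mi, a negligible share of the ~32Gi node that rounds to 0%.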
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-807343 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-807343 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-807343 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-807343 event: Registered Node addons-807343 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 8e cd 6f 69 24 08 06
	[  +1.312142] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 de b3 ef fb 08 06
	[  +5.250752] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 f0 c0 b8 a9 31 08 06
	[  +0.638551] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 7b d2 18 97 ab 08 06
	[  +0.319330] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 ec 47 a4 0f 0d 08 06
	[ +20.972224] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 33 c8 f7 44 59 08 06
	[  +3.877385] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 74 96 86 3a 2b 08 06
	[Sep19 18:41] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 f9 1b 77 83 e0 08 06
	[  +0.060405] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca a0 1b 31 c3 d4 08 06
	[Sep19 18:42] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 83 1f ba 10 b6 08 06
	[  +0.000481] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff 36 8b b9 4e 6d 88 08 06
	[Sep19 18:51] IPv4: martian source 10.244.0.1 from 10.244.0.36, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 99 cd b7 2e 48 08 06
	[Sep19 18:52] IPv4: martian source 10.244.0.37 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 74 96 86 3a 2b 08 06
	
	
	==> etcd [d6ed3b2e997d] <==
	{"level":"info","ts":"2024-09-19T18:39:30.402231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:39:30.403185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-19T18:39:50.076986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.137521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-807343\" ","response":"range_response_count:1 size:4404"}
	{"level":"info","ts":"2024-09-19T18:39:50.077068Z","caller":"traceutil/trace.go:171","msg":"trace[16821754] range","detail":"{range_begin:/registry/minions/addons-807343; range_end:; response_count:1; response_revision:744; }","duration":"103.227171ms","start":"2024-09-19T18:39:49.973826Z","end":"2024-09-19T18:39:50.077053Z","steps":["trace[16821754] 'agreement among raft nodes before linearized reading'  (duration: 95.691891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:50.077392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/snapshot-controller-56fcc65765\" ","response":"range_response_count:1 size:2108"}
	{"level":"info","ts":"2024-09-19T18:39:50.077430Z","caller":"traceutil/trace.go:171","msg":"trace[1823657821] range","detail":"{range_begin:/registry/replicasets/kube-system/snapshot-controller-56fcc65765; range_end:; response_count:1; response_revision:745; }","duration":"103.375802ms","start":"2024-09-19T18:39:49.974042Z","end":"2024-09-19T18:39:50.077418Z","steps":["trace[1823657821] 'agreement among raft nodes before linearized reading'  (duration: 103.280269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:50.667358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.994633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005939789616 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/volcano-system/volcano-controllers.17f6b93fb3b23040\" mod_revision:0 > success:<request_put:<key:\"/registry/events/volcano-system/volcano-controllers.17f6b93fb3b23040\" value_size:651 lease:8128032005939788591 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:50.667600Z","caller":"traceutil/trace.go:171","msg":"trace[1402537900] transaction","detail":"{read_only:false; response_revision:777; number_of_response:1; }","duration":"196.580902ms","start":"2024-09-19T18:39:50.471006Z","end":"2024-09-19T18:39:50.667587Z","steps":["trace[1402537900] 'process raft request'  (duration: 196.529126ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:50.667789Z","caller":"traceutil/trace.go:171","msg":"trace[496157971] transaction","detail":"{read_only:false; response_revision:775; number_of_response:1; }","duration":"197.734182ms","start":"2024-09-19T18:39:50.470045Z","end":"2024-09-19T18:39:50.667779Z","steps":["trace[496157971] 'process raft request'  (duration: 25.740904ms)","trace[496157971] 'compare'  (duration: 102.89098ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:50.667893Z","caller":"traceutil/trace.go:171","msg":"trace[2115824162] linearizableReadLoop","detail":"{readStateIndex:790; appliedIndex:789; }","duration":"197.34399ms","start":"2024-09-19T18:39:50.470542Z","end":"2024-09-19T18:39:50.667886Z","steps":["trace[2115824162] 'read index received'  (duration: 25.238293ms)","trace[2115824162] 'applied index is now lower than readState.Index'  (duration: 172.104628ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:50.667998Z","caller":"traceutil/trace.go:171","msg":"trace[1686226436] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"197.202394ms","start":"2024-09-19T18:39:50.470789Z","end":"2024-09-19T18:39:50.667991Z","steps":["trace[1686226436] 'process raft request'  (duration: 196.661665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:50.668337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.500082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7c65d6cfc9-cfl84.17f6b93ef9b1a18d\" ","response":"range_response_count:1 size:787"}
	{"level":"info","ts":"2024-09-19T18:39:50.668375Z","caller":"traceutil/trace.go:171","msg":"trace[333774830] range","detail":"{range_begin:/registry/events/kube-system/coredns-7c65d6cfc9-cfl84.17f6b93ef9b1a18d; range_end:; response_count:1; response_revision:777; }","duration":"195.55941ms","start":"2024-09-19T18:39:50.472804Z","end":"2024-09-19T18:39:50.668363Z","steps":["trace[333774830] 'agreement among raft nodes before linearized reading'  (duration: 195.433353ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:50.668578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.029526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/volcano-system\" ","response":"range_response_count:1 size:664"}
	{"level":"info","ts":"2024-09-19T18:39:50.668602Z","caller":"traceutil/trace.go:171","msg":"trace[381116697] range","detail":"{range_begin:/registry/namespaces/volcano-system; range_end:; response_count:1; response_revision:777; }","duration":"198.055458ms","start":"2024-09-19T18:39:50.470539Z","end":"2024-09-19T18:39:50.668594Z","steps":["trace[381116697] 'agreement among raft nodes before linearized reading'  (duration: 197.974633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:50.668747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.341749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2024-09-19T18:39:50.668774Z","caller":"traceutil/trace.go:171","msg":"trace[130722087] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:777; }","duration":"188.370893ms","start":"2024-09-19T18:39:50.480395Z","end":"2024-09-19T18:39:50.668766Z","steps":["trace[130722087] 'agreement among raft nodes before linearized reading'  (duration: 188.295183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:58.071358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.964887ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005939790079 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" mod_revision:922 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" value_size:731 lease:8128032005939788591 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-svlpf.17f6b93face8d59d\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:58.071445Z","caller":"traceutil/trace.go:171","msg":"trace[1960524016] linearizableReadLoop","detail":"{readStateIndex:988; appliedIndex:987; }","duration":"149.322364ms","start":"2024-09-19T18:39:57.922112Z","end":"2024-09-19T18:39:58.071434Z","steps":["trace[1960524016] 'read index received'  (duration: 39.089852ms)","trace[1960524016] 'applied index is now lower than readState.Index'  (duration: 110.231519ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:58.071478Z","caller":"traceutil/trace.go:171","msg":"trace[1242200740] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"149.743114ms","start":"2024-09-19T18:39:57.921717Z","end":"2024-09-19T18:39:58.071460Z","steps":["trace[1242200740] 'process raft request'  (duration: 39.494631ms)","trace[1242200740] 'compare'  (duration: 109.83951ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:39:58.071603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.480294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.17f6b940c056e1c9\" ","response":"range_response_count:1 size:912"}
	{"level":"info","ts":"2024-09-19T18:39:58.071639Z","caller":"traceutil/trace.go:171","msg":"trace[90722266] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-patch.17f6b940c056e1c9; range_end:; response_count:1; response_revision:971; }","duration":"149.519404ms","start":"2024-09-19T18:39:57.922109Z","end":"2024-09-19T18:39:58.071629Z","steps":["trace[90722266] 'agreement among raft nodes before linearized reading'  (duration: 149.366006ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:49:30.795252Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1887}
	{"level":"info","ts":"2024-09-19T18:49:30.820176Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1887,"took":"24.431283ms","hash":1617464157,"current-db-size-bytes":8732672,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4853760,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-19T18:49:30.820215Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1617464157,"revision":1887,"compact-revision":-1}
	
	
	==> gcp-auth [17e78c6f0cb4] <==
	2024/09/19 18:43:02 Ready to write response ...
	2024/09/19 18:51:04 Ready to marshal response ...
	2024/09/19 18:51:04 Ready to write response ...
	2024/09/19 18:51:04 Ready to marshal response ...
	2024/09/19 18:51:04 Ready to write response ...
	2024/09/19 18:51:12 Ready to marshal response ...
	2024/09/19 18:51:12 Ready to write response ...
	2024/09/19 18:51:14 Ready to marshal response ...
	2024/09/19 18:51:14 Ready to write response ...
	2024/09/19 18:51:18 Ready to marshal response ...
	2024/09/19 18:51:18 Ready to write response ...
	2024/09/19 18:51:21 Ready to marshal response ...
	2024/09/19 18:51:21 Ready to write response ...
	2024/09/19 18:51:21 Ready to marshal response ...
	2024/09/19 18:51:21 Ready to write response ...
	2024/09/19 18:51:21 Ready to marshal response ...
	2024/09/19 18:51:21 Ready to write response ...
	2024/09/19 18:51:32 Ready to marshal response ...
	2024/09/19 18:51:32 Ready to write response ...
	2024/09/19 18:51:47 Ready to marshal response ...
	2024/09/19 18:51:47 Ready to write response ...
	2024/09/19 18:51:55 Ready to marshal response ...
	2024/09/19 18:51:55 Ready to write response ...
	2024/09/19 18:52:03 Ready to marshal response ...
	2024/09/19 18:52:03 Ready to write response ...
	
	
	==> kernel <==
	 18:52:16 up 34 min,  0 users,  load average: 2.17, 0.89, 0.49
	Linux addons-807343 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d399ae9b2f7d] <==
	W0919 18:42:53.988091       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0919 18:42:53.988253       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0919 18:42:54.082287       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0919 18:42:54.187729       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0919 18:42:54.477350       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0919 18:42:54.768400       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0919 18:51:21.304157       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.22.220"}
	I0919 18:51:25.926806       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0919 18:51:28.318436       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 18:51:48.267573       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:48.267628       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:48.283280       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:48.283321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:48.294616       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:48.294665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:51:48.307991       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:51:48.308028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:51:49.285199       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:51:49.308292       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 18:51:49.370183       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0919 18:51:55.167707       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:51:55.320624       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.87.156"}
	I0919 18:51:58.570428       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:51:59.586084       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0919 18:52:03.863128       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.231.140"}
	
	
	==> kube-controller-manager [32c83be9d618] <==
	I0919 18:52:03.736658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.559µs"
	I0919 18:52:03.771883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.93µs"
	I0919 18:52:05.014800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.24168ms"
	I0919 18:52:05.014883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.008µs"
	I0919 18:52:05.447440       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0919 18:52:05.448654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.62µs"
	I0919 18:52:05.470060       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0919 18:52:06.477741       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:06.477783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:06.478548       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:06.478577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:07.990136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:07.990170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:52:08.925246       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0919 18:52:09.829037       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0919 18:52:09.829072       1 shared_informer.go:320] Caches are synced for resource quota
	W0919 18:52:09.855357       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:09.855392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:52:09.967249       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0919 18:52:09.967286       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 18:52:10.336072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-807343"
	I0919 18:52:15.236863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.686µs"
	I0919 18:52:15.526959       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0919 18:52:15.947356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:15.947390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
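	
	The repeated PartialObjectMetadata failures above line up with the watcher terminations the apiserver logged at 18:51:49 (snapshot.storage.k8s.io) and 18:51:59 (gadget.kinvolk.io): the controller-manager's metadata informers keep trying to list group-versions whose CRDs were just deleted, and retry until its discovery information catches up. A quick check of which of those groups still serve resources (hypothetical invocation; grep exits non-zero once nothing matches):
	
	  kubectl --context addons-807343 api-resources | grep -E 'snapshot|gadget'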
	
	
	==> kube-proxy [969a0f35b949] <==
	I0919 18:39:43.792337       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:39:44.469618       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:39:44.469761       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:44.970481       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:39:44.970535       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:44.973731       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:44.974101       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:44.974117       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:44.976275       1 config.go:199] "Starting service config controller"
	I0919 18:39:44.976291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:44.976310       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:44.976315       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:44.976563       1 config.go:328] "Starting node config controller"
	I0919 18:39:44.976571       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:44.981240       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:39:45.076846       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:45.076943       1 shared_informer.go:320] Caches are synced for service config
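	
	The 18:39:44 warning is kube-proxy noting that nodePortAddresses is unset, so NodePort services accept connections on every local IP. On a kubeadm-style cluster kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system, so applying the warning's own suggestion would look roughly like the sketch below; the edit is illustrative, not something this test run performs.
	
	  kubectl --context addons-807343 -n kube-system edit configmap kube-proxy
	  # In the embedded KubeProxyConfiguration, set:
	  #   nodePortAddresses: ["primary"]
	  # then delete the kube-proxy pods so the DaemonSet restarts them with the new config.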
	
	
	==> kube-scheduler [75e79c347dfc] <==
	W0919 18:39:32.468462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:32.468624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.467742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:32.468732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.467794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:32.468775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.467850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:32.468803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.468015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:32.468841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.468260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:39:32.468872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:32.468259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:32.468899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:33.306462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:33.306502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:33.325723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:39:33.325763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:33.337982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:33.338011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:33.486558       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:39:33.486601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:33.498988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:33.499017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:34.093966       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:52:09 addons-807343 kubelet[2431]: I0919 18:52:09.058259    2431 scope.go:117] "RemoveContainer" containerID="10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
	Sep 19 18:52:09 addons-807343 kubelet[2431]: E0919 18:52:09.058779    2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8" containerID="10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
	Sep 19 18:52:09 addons-807343 kubelet[2431]: I0919 18:52:09.058819    2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"} err="failed to get container status \"10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 10bff7ad8724f4a60f545ad0011f539a3a592c1cc18fc36c7ceb4521438903a8"
	Sep 19 18:52:10 addons-807343 kubelet[2431]: I0919 18:52:10.716132    2431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097e62a7-8e4d-4361-884e-3f59d6fd556a" path="/var/lib/kubelet/pods/097e62a7-8e4d-4361-884e-3f59d6fd556a/volumes"
	Sep 19 18:52:14 addons-807343 kubelet[2431]: E0919 18:52:14.710119    2431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="061e7e93-a91f-442f-ab9f-2c492cf63438"
	Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936609    2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds\") pod \"e48103cd-6304-4934-990b-0d83789f05d3\" (UID: \"e48103cd-6304-4934-990b-0d83789f05d3\") "
	Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936676    2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g699b\" (UniqueName: \"kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b\") pod \"e48103cd-6304-4934-990b-0d83789f05d3\" (UID: \"e48103cd-6304-4934-990b-0d83789f05d3\") "
	Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.936728    2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e48103cd-6304-4934-990b-0d83789f05d3" (UID: "e48103cd-6304-4934-990b-0d83789f05d3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:52:14 addons-807343 kubelet[2431]: I0919 18:52:14.938403    2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b" (OuterVolumeSpecName: "kube-api-access-g699b") pod "e48103cd-6304-4934-990b-0d83789f05d3" (UID: "e48103cd-6304-4934-990b-0d83789f05d3"). InnerVolumeSpecName "kube-api-access-g699b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.037825    2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g699b\" (UniqueName: \"kubernetes.io/projected/e48103cd-6304-4934-990b-0d83789f05d3-kube-api-access-g699b\") on node \"addons-807343\" DevicePath \"\""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.037869    2431 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e48103cd-6304-4934-990b-0d83789f05d3-gcp-creds\") on node \"addons-807343\" DevicePath \"\""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.642191    2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hnj\" (UniqueName: \"kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj\") pod \"073b4ea3-119e-40f8-9331-51fd7dfdf5bf\" (UID: \"073b4ea3-119e-40f8-9331-51fd7dfdf5bf\") "
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.642237    2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475lf\" (UniqueName: \"kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf\") pod \"5daab8c5-d486-4f2e-a165-b7129bb49ef1\" (UID: \"5daab8c5-d486-4f2e-a165-b7129bb49ef1\") "
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.644304    2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf" (OuterVolumeSpecName: "kube-api-access-475lf") pod "5daab8c5-d486-4f2e-a165-b7129bb49ef1" (UID: "5daab8c5-d486-4f2e-a165-b7129bb49ef1"). InnerVolumeSpecName "kube-api-access-475lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.644355    2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj" (OuterVolumeSpecName: "kube-api-access-b9hnj") pod "073b4ea3-119e-40f8-9331-51fd7dfdf5bf" (UID: "073b4ea3-119e-40f8-9331-51fd7dfdf5bf"). InnerVolumeSpecName "kube-api-access-b9hnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.743245    2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b9hnj\" (UniqueName: \"kubernetes.io/projected/073b4ea3-119e-40f8-9331-51fd7dfdf5bf-kube-api-access-b9hnj\") on node \"addons-807343\" DevicePath \"\""
	Sep 19 18:52:15 addons-807343 kubelet[2431]: I0919 18:52:15.743285    2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-475lf\" (UniqueName: \"kubernetes.io/projected/5daab8c5-d486-4f2e-a165-b7129bb49ef1-kube-api-access-475lf\") on node \"addons-807343\" DevicePath \"\""
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.117485    2431 scope.go:117] "RemoveContainer" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.137448    2431 scope.go:117] "RemoveContainer" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: E0919 18:52:16.138167    2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15" containerID="a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.138202    2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"} err="failed to get container status \"a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15\": rpc error: code = Unknown desc = Error response from daemon: No such container: a12a2a40d4e0df140d828a994f75bb9f8e2ae30b36684e6fd3444f373786fa15"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.138224    2431 scope.go:117] "RemoveContainer" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.176008    2431 scope.go:117] "RemoveContainer" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: E0919 18:52:16.176652    2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4" containerID="d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
	Sep 19 18:52:16 addons-807343 kubelet[2431]: I0919 18:52:16.176686    2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"} err="failed to get container status \"d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4\": rpc error: code = Unknown desc = Error response from daemon: No such container: d05dc574a112f35aae1273679b533b15af41e83fc6023984721ff06773c375d4"
	
	
	==> storage-provisioner [67149d9e3be2] <==
	I0919 18:39:48.488021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:39:48.577006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:39:48.577054       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:39:48.584649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:39:48.584785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960!
	I0919 18:39:48.585520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f97acc-3900-4852-a79f-87c3d35f67c3", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960 became leader
	I0919 18:39:48.685092       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-807343_706b36b6-f72a-4b59-a5a2-5eba49b7f960!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-807343 -n addons-807343
helpers_test.go:261: (dbg) Run:  kubectl --context addons-807343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-807343 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-807343 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-807343/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:43:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rl8zt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rl8zt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-807343
	  Normal   Pulling    7m43s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.33s)
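
The post-mortem narrows the failure: the only non-running pod is busybox, stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unauthorized: authentication failed". Since the pod also carries gcp-auth-injected credentials (the gcp-creds mount and the this_is_fake project variables above), a plausible culprit is the injected pull credentials rather than the registry itself. A quick way to split the two (hypothetical, run from the CI host, which has no injected credentials):

  # If this pull succeeds, the image and registry are reachable and the
  # unauthorized error points at the credentials injected into the pod.
  docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc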

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.96
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.18
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
20 TestDownloadOnlyKic 0.94
21 TestBinaryMirror 0.73
22 TestOffline 71.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 207.81
29 TestAddons/serial/Volcano 36.65
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 17.64
35 TestAddons/parallel/InspektorGadget 11.83
36 TestAddons/parallel/MetricsServer 5.89
37 TestAddons/parallel/HelmTiller 9.36
39 TestAddons/parallel/CSI 43.95
40 TestAddons/parallel/Headlamp 16.25
41 TestAddons/parallel/CloudSpanner 5.4
42 TestAddons/parallel/LocalPath 50.95
43 TestAddons/parallel/NvidiaDevicePlugin 6.39
44 TestAddons/parallel/Yakd 10.72
45 TestAddons/StoppedEnableDisable 5.8
46 TestCertOptions 32.38
47 TestCertExpiration 228.49
48 TestDockerFlags 34.11
49 TestForceSystemdFlag 32.41
50 TestForceSystemdEnv 37.16
52 TestKVMDriverInstallOrUpdate 1.23
56 TestErrorSpam/setup 21.07
57 TestErrorSpam/start 0.53
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.12
60 TestErrorSpam/unpause 1.28
61 TestErrorSpam/stop 10.81
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 63.49
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.61
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.28
73 TestFunctional/serial/CacheCmd/cache/add_local 0.65
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 39.52
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.89
84 TestFunctional/serial/LogsFileCmd 0.91
85 TestFunctional/serial/InvalidService 4.08
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 9.53
89 TestFunctional/parallel/DryRun 0.3
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.83
95 TestFunctional/parallel/ServiceCmdConnect 16.66
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 41.75
99 TestFunctional/parallel/SSHCmd 0.61
100 TestFunctional/parallel/CpCmd 1.7
101 TestFunctional/parallel/MySQL 21.74
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.45
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
111 TestFunctional/parallel/License 0.2
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
116 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
117 TestFunctional/parallel/ImageCommands/Setup 0.43
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.47
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.94
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ServiceCmd/DeployApp 16.15
139 TestFunctional/parallel/DockerEnv/bash 0.83
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
141 TestFunctional/parallel/ProfileCmd/profile_list 0.38
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
145 TestFunctional/parallel/MountCmd/any-port 6.7
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
147 TestFunctional/parallel/ServiceCmd/List 0.91
148 TestFunctional/parallel/ServiceCmd/JSONOutput 1
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
150 TestFunctional/parallel/ServiceCmd/Format 0.52
151 TestFunctional/parallel/ServiceCmd/URL 0.51
152 TestFunctional/parallel/MountCmd/specific-port 1.74
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 99.48
161 TestMultiControlPlane/serial/DeployApp 4.41
162 TestMultiControlPlane/serial/PingHostFromPods 0.99
163 TestMultiControlPlane/serial/AddWorkerNode 20.65
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 15.07
167 TestMultiControlPlane/serial/StopSecondaryNode 11.24
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 34.85
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 212.97
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.19
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
174 TestMultiControlPlane/serial/StopCluster 32.59
175 TestMultiControlPlane/serial/RestartCluster 81.28
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 34.93
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestImageBuild/serial/Setup 20.5
182 TestImageBuild/serial/NormalBuild 1.19
183 TestImageBuild/serial/BuildWithBuildArg 0.71
184 TestImageBuild/serial/BuildWithDockerIgnore 0.53
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.53
189 TestJSONOutput/start/Command 34.51
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.49
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.43
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.81
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.18
214 TestKicCustomNetwork/create_custom_network 22.21
215 TestKicCustomNetwork/use_default_bridge_network 22.44
216 TestKicExistingNetwork 24.7
217 TestKicCustomSubnet 23.99
218 TestKicStaticIP 26.03
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 50.22
223 TestMountStart/serial/StartWithMountFirst 6.46
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 6.16
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.45
228 TestMountStart/serial/VerifyMountPostDelete 0.24
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 7.67
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 56.97
235 TestMultiNode/serial/DeployApp2Nodes 47.34
236 TestMultiNode/serial/PingHostFrom2Pods 0.67
237 TestMultiNode/serial/AddNode 18.51
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.59
240 TestMultiNode/serial/CopyFile 8.74
241 TestMultiNode/serial/StopNode 2.05
242 TestMultiNode/serial/StartAfterStop 9.52
243 TestMultiNode/serial/RestartKeepsNodes 99.3
244 TestMultiNode/serial/DeleteNode 5.13
245 TestMultiNode/serial/StopMultiNode 21.38
246 TestMultiNode/serial/RestartMultiNode 53.55
247 TestMultiNode/serial/ValidateNameConflict 24.61
252 TestPreload 93.86
254 TestScheduledStopUnix 94.73
255 TestSkaffold 98.19
257 TestInsufficientStorage 9.45
258 TestRunningBinaryUpgrade 102.04
260 TestKubernetesUpgrade 326.09
261 TestMissingContainerUpgrade 130.25
274 TestPause/serial/Start 34.92
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
284 TestNoKubernetes/serial/StartWithK8s 26.48
285 TestPause/serial/SecondStartNoReconfiguration 33.09
286 TestNoKubernetes/serial/StartWithStopK8s 7.27
287 TestNoKubernetes/serial/Start 8.56
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
289 TestNoKubernetes/serial/ProfileList 1.75
290 TestPause/serial/Pause 0.61
291 TestPause/serial/VerifyStatus 0.31
292 TestNoKubernetes/serial/Stop 1.21
293 TestPause/serial/Unpause 0.46
294 TestPause/serial/PauseAgain 0.69
295 TestNoKubernetes/serial/StartNoArgs 8.35
296 TestPause/serial/DeletePaused 4.18
297 TestPause/serial/VerifyDeletedResources 0.72
298 TestStoppedBinaryUpgrade/Setup 0.53
299 TestStoppedBinaryUpgrade/Upgrade 62.24
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
301 TestNetworkPlugins/group/auto/Start 62.8
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
303 TestNetworkPlugins/group/kindnet/Start 45.61
304 TestNetworkPlugins/group/auto/KubeletFlags 0.37
305 TestNetworkPlugins/group/auto/NetCatPod 10.56
306 TestNetworkPlugins/group/auto/DNS 0.13
307 TestNetworkPlugins/group/auto/Localhost 0.11
308 TestNetworkPlugins/group/auto/HairPin 0.11
309 TestNetworkPlugins/group/calico/Start 58.99
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
312 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
313 TestNetworkPlugins/group/custom-flannel/Start 47.66
314 TestNetworkPlugins/group/kindnet/DNS 0.2
315 TestNetworkPlugins/group/kindnet/Localhost 0.14
316 TestNetworkPlugins/group/kindnet/HairPin 0.16
317 TestNetworkPlugins/group/false/Start 69.9
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.32
320 TestNetworkPlugins/group/calico/NetCatPod 9.2
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.22
323 TestNetworkPlugins/group/calico/DNS 0.14
324 TestNetworkPlugins/group/calico/Localhost 0.12
325 TestNetworkPlugins/group/calico/HairPin 0.11
326 TestNetworkPlugins/group/custom-flannel/DNS 0.16
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
329 TestNetworkPlugins/group/enable-default-cni/Start 66.04
330 TestNetworkPlugins/group/flannel/Start 43.72
331 TestNetworkPlugins/group/false/KubeletFlags 0.26
332 TestNetworkPlugins/group/false/NetCatPod 10.21
333 TestNetworkPlugins/group/false/DNS 0.12
334 TestNetworkPlugins/group/false/Localhost 0.11
335 TestNetworkPlugins/group/false/HairPin 0.12
336 TestNetworkPlugins/group/flannel/ControllerPod 6
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
338 TestNetworkPlugins/group/kubenet/Start 67.79
339 TestNetworkPlugins/group/flannel/NetCatPod 11.21
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
342 TestNetworkPlugins/group/flannel/DNS 0.15
343 TestNetworkPlugins/group/flannel/Localhost 0.15
344 TestNetworkPlugins/group/flannel/HairPin 0.13
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
348 TestNetworkPlugins/group/bridge/Start 33.29
350 TestStartStop/group/old-k8s-version/serial/FirstStart 132.15
352 TestStartStop/group/no-preload/serial/FirstStart 68.77
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
354 TestNetworkPlugins/group/bridge/NetCatPod 10.25
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
356 TestNetworkPlugins/group/kubenet/NetCatPod 8.21
357 TestNetworkPlugins/group/bridge/DNS 21.07
358 TestNetworkPlugins/group/kubenet/DNS 0.12
359 TestNetworkPlugins/group/kubenet/Localhost 0.1
360 TestNetworkPlugins/group/kubenet/HairPin 0.11
362 TestStartStop/group/embed-certs/serial/FirstStart 67.18
363 TestNetworkPlugins/group/bridge/Localhost 0.12
364 TestNetworkPlugins/group/bridge/HairPin 0.13
365 TestStartStop/group/no-preload/serial/DeployApp 8.29
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 35.3
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.75
369 TestStartStop/group/no-preload/serial/Stop 11.03
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
371 TestStartStop/group/no-preload/serial/SecondStart 263.07
372 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.75
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.77
375 TestStartStop/group/embed-certs/serial/DeployApp 9.25
376 TestStartStop/group/old-k8s-version/serial/DeployApp 7.39
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
378 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.55
380 TestStartStop/group/embed-certs/serial/Stop 10.8
381 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
382 TestStartStop/group/old-k8s-version/serial/Stop 10.88
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
384 TestStartStop/group/embed-certs/serial/SecondStart 265.66
385 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
386 TestStartStop/group/old-k8s-version/serial/SecondStart 130.96
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/old-k8s-version/serial/Pause 2.33
392 TestStartStop/group/newest-cni/serial/FirstStart 28.01
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
395 TestStartStop/group/newest-cni/serial/Stop 10.73
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
397 TestStartStop/group/newest-cni/serial/SecondStart 14.12
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
401 TestStartStop/group/newest-cni/serial/Pause 2.5
402 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
405 TestStartStop/group/no-preload/serial/Pause 2.24
406 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.37
410 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/embed-certs/serial/Pause 2.2
TestDownloadOnly/v1.20.0/json-events (4.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498008 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498008 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.737088512s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.74s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 18:38:51.132003   14476 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0919 18:38:51.132084   14476 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498008
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498008: exit status 85 (54.319388ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498008 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |          |
	|         | -p download-only-498008        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:46
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:46.433943   14488 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:46.434081   14488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:46.434093   14488 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:46.434100   14488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:46.434280   14488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	W0919 18:38:46.434386   14488 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-7708/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-7708/.minikube/config/config.json: no such file or directory
	I0919 18:38:46.434962   14488 out.go:352] Setting JSON to true
	I0919 18:38:46.435877   14488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1268,"bootTime":1726769858,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:38:46.435965   14488 start.go:139] virtualization: kvm guest
	I0919 18:38:46.438287   14488 out.go:97] [download-only-498008] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:38:46.438375   14488 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:38:46.438412   14488 notify.go:220] Checking for updates...
	I0919 18:38:46.439523   14488 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:38:46.440657   14488 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:46.441845   14488 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:38:46.443094   14488 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	I0919 18:38:46.444236   14488 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:38:46.446215   14488 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:38:46.446380   14488 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:46.466960   14488 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:46.467077   14488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:46.790106   14488 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:38:46.780548963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:46.790208   14488 docker.go:318] overlay module found
	I0919 18:38:46.791677   14488 out.go:97] Using the docker driver based on user configuration
	I0919 18:38:46.791698   14488 start.go:297] selected driver: docker
	I0919 18:38:46.791703   14488 start.go:901] validating driver "docker" against <nil>
	I0919 18:38:46.791780   14488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:46.837647   14488 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:38:46.829359786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:46.837821   14488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:38:46.838352   14488 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0919 18:38:46.838529   14488 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:38:46.840160   14488 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-498008 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498008"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498008
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (3.96s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-328983 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-328983 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.960076425s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.96s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 18:38:55.448775   14476 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0919 18:38:55.448815   14476 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
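
The preload-exists step above passes because it only verifies that the preloaded image tarball is already cached on disk, at the path shown in the log. A minimal Go sketch of that existence check (assuming MINIKUBE_HOME points at the .minikube directory; this is illustrative, not minikube's own code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Assumed: MINIKUBE_HOME points at the .minikube directory, as in the
		// MINIKUBE_HOME line earlier in this report.
		home := os.Getenv("MINIKUBE_HOME")
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4")
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("found local preload:", tarball)
	}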

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-328983
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-328983: exit status 85 (51.760078ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-498008 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-498008        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| delete  | -p download-only-498008        | download-only-498008 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only        | download-only-328983 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-328983        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:51
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:51.523535   14832 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:51.523635   14832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:51.523645   14832 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:51.523649   14832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:51.523795   14832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 18:38:51.524319   14832 out.go:352] Setting JSON to true
	I0919 18:38:51.525138   14832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1273,"bootTime":1726769858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:38:51.525220   14832 start.go:139] virtualization: kvm guest
	I0919 18:38:51.527790   14832 out.go:97] [download-only-328983] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:38:51.527918   14832 notify.go:220] Checking for updates...
	I0919 18:38:51.529099   14832 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:38:51.530266   14832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:51.531327   14832 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:38:51.532553   14832 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	I0919 18:38:51.533632   14832 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:38:51.535787   14832 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:38:51.536027   14832 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:51.558310   14832 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:51.558419   14832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:51.602782   14832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-19 18:38:51.594353646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:51.602872   14832 docker.go:318] overlay module found
	I0919 18:38:51.604427   14832 out.go:97] Using the docker driver based on user configuration
	I0919 18:38:51.604446   14832 start.go:297] selected driver: docker
	I0919 18:38:51.604451   14832 start.go:901] validating driver "docker" against <nil>
	I0919 18:38:51.604518   14832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:51.647883   14832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-19 18:38:51.639768162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:51.648058   14832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:38:51.648549   14832 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0919 18:38:51.648700   14832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:38:51.650502   14832 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-328983 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328983"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)
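
The "exit status 85" above is the expected outcome rather than a suite failure: a download-only profile never creates a host, so "minikube logs" has nothing to read and the test expects the command to fail. A hedged sketch of that assertion, reusing the binary path, profile name, and observed exit code from the log (the check itself is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-328983")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("expected failure: no host exists for a download-only profile")
			return
		}
		fmt.Println("unexpected result:", err)
	}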

TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-328983
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (0.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-260378 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-260378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-260378
--- PASS: TestDownloadOnlyKic (0.94s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
I0919 18:38:56.958943   14476 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-546000 --alsologtostderr --binary-mirror http://127.0.0.1:33185 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-546000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-546000
--- PASS: TestBinaryMirror (0.73s)
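
The "?checksum=file:<url>" suffix in the kubectl URL above is hashicorp/go-getter syntax: the client downloads the referenced .sha256 file and verifies the payload against it (the same go-getter client surfaces in the TestKVMDriverInstallOrUpdate log near the end of this report). A small illustrative use of the convention, not minikube's own download code:

	package main

	import (
		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// Download kubectl and verify it against the referenced .sha256 file;
		// GetFile fails if the checksum does not match.
		src := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256"
		if err := getter.GetFile("/tmp/kubectl", src); err != nil {
			panic(err)
		}
	}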

TestOffline (71.79s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-930939 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-930939 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m9.601023613s)
helpers_test.go:175: Cleaning up "offline-docker-930939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-930939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-930939: (2.18633829s)
--- PASS: TestOffline (71.79s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-807343
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-807343: exit status 85 (50.112075ms)

-- stdout --
	* Profile "addons-807343" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-807343"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-807343
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-807343: exit status 85 (48.23567ms)

-- stdout --
	* Profile "addons-807343" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-807343"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (207.81s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-807343 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-807343 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m27.809139109s)
--- PASS: TestAddons/Setup (207.81s)

TestAddons/serial/Volcano (36.65s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.108027ms
addons_test.go:905: volcano-admission stabilized in 11.15302ms
addons_test.go:897: volcano-scheduler stabilized in 11.213425ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-zhvqc" [65ae989e-7454-49fc-a903-326c3b659551] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002873049s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-mw4v8" [556213a3-8e71-48de-8f5f-0ef00ac05c63] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003037818s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-npqvh" [09b6914d-4200-435f-b046-ff65781c0c6c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003442117s
addons_test.go:932: (dbg) Run:  kubectl --context addons-807343 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-807343 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-807343 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c0b9c57b-faf0-4b9e-b92f-67df0d0fdcab] Pending
helpers_test.go:344: "test-job-nginx-0" [c0b9c57b-faf0-4b9e-b92f-67df0d0fdcab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c0b9c57b-faf0-4b9e-b92f-67df0d0fdcab] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.00284857s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable volcano --alsologtostderr -v=1: (10.329902541s)
--- PASS: TestAddons/serial/Volcano (36.65s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-807343 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-807343 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (17.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-807343 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-807343 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-807343 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d5bd6923-a904-471f-b58b-f0e78d290ac0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d5bd6923-a904-471f-b58b-f0e78d290ac0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00351678s
I0919 18:52:03.331124   14476 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-807343 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable ingress --alsologtostderr -v=1: (7.530845457s)
--- PASS: TestAddons/parallel/Ingress (17.64s)
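
The curl probe above exercises host-based routing: the request is sent to the cluster IP while the Host header selects the nginx ingress rule. The same probe in Go, with the IP and hostname taken from the log and the client code itself only a sketch:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host overrides the Host header, so the ingress controller
		// matches the nginx.example.com rule even though we dial the node IP.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}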

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4wwwx" [523fc257-b96d-4f86-a586-5d62872e432b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003691444s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-807343
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-807343: (5.824438667s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/MetricsServer (5.89s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.323034ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-d74dx" [d90ed638-b34d-4a70-a846-898f37d3a262] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.029247755s
addons_test.go:417: (dbg) Run:  kubectl --context addons-807343 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/HelmTiller (9.36s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.10376ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-vmsvx" [3388a43f-3bd2-4f3a-8975-ecd10db08a16] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003264106s
addons_test.go:475: (dbg) Run:  kubectl --context addons-807343 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-807343 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.909585971s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.36s)

TestAddons/parallel/CSI (43.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.315354ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-807343 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-807343 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6c0e39f8-f4f2-418c-bc3f-3d5bf28ccfef] Pending
helpers_test.go:344: "task-pv-pod" [6c0e39f8-f4f2-418c-bc3f-3d5bf28ccfef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6c0e39f8-f4f2-418c-bc3f-3d5bf28ccfef] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003452945s
addons_test.go:590: (dbg) Run:  kubectl --context addons-807343 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-807343 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-807343 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-807343 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-807343 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-807343 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-807343 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [79a31de1-1afa-47ef-a267-f355fa13729d] Pending
helpers_test.go:344: "task-pv-pod-restore" [79a31de1-1afa-47ef-a267-f355fa13729d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [79a31de1-1afa-47ef-a267-f355fa13729d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004158602s
addons_test.go:632: (dbg) Run:  kubectl --context addons-807343 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-807343 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-807343 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.505587442s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.95s)
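
The long run of identical "get pvc hpvc -o jsonpath={.status.phase}" invocations above is a poll loop: the helper re-reads the claim's phase until it reports Bound or a timeout elapses. A minimal sketch of that loop, assuming kubectl and the addons-807343 context are available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 60; i++ {
			// Read only the PVC's phase, exactly as the jsonpath query above does.
			out, err := exec.Command("kubectl", "--context", "addons-807343",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}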

TestAddons/parallel/Headlamp (16.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-807343 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-tn5mm" [bcfeb304-9d8d-4321-8814-976f8d2cde8f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-tn5mm" [bcfeb304-9d8d-4321-8814-976f8d2cde8f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003360562s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable headlamp --alsologtostderr -v=1: (5.593326256s)
--- PASS: TestAddons/parallel/Headlamp (16.25s)

TestAddons/parallel/CloudSpanner (5.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-llgpg" [10bc7acf-acac-4372-bb58-f50a829fe5a5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003727126s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-807343
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

TestAddons/parallel/LocalPath (50.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-807343 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-807343 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [af3af28e-9161-453d-b575-3d0c2070eec5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [af3af28e-9161-453d-b575-3d0c2070eec5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [af3af28e-9161-453d-b575-3d0c2070eec5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002983166s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-807343 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 ssh "cat /opt/local-path-provisioner/pvc-ac5b37a8-6b22-43fd-8e57-431a7ab03924_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-807343 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-807343 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.130867706s)
--- PASS: TestAddons/parallel/LocalPath (50.95s)

TestAddons/parallel/NvidiaDevicePlugin (6.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4rj76" [0c3f2ba6-3e70-4d40-844b-605e747b7435] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003020592s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-807343
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.39s)

TestAddons/parallel/Yakd (10.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
I0919 18:51:04.571039   14476 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j6c7w" [26dab240-3cef-40a9-a292-8d50f275bc3f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003907398s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-807343 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-807343 addons disable yakd --alsologtostderr -v=1: (5.717465469s)
--- PASS: TestAddons/parallel/Yakd (10.72s)

TestAddons/StoppedEnableDisable (5.8s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-807343
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-807343: (5.577960917s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-807343
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-807343
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-807343
--- PASS: TestAddons/StoppedEnableDisable (5.80s)

TestCertOptions (32.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-171585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-171585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.660439651s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-171585 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171585 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-171585 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-171585
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-171585: (3.057145552s)
--- PASS: TestCertOptions (32.38s)

TestCertExpiration (228.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292827 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292827 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.474932115s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292827 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0919 19:25:43.085563   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:25:48.207679   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292827 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.843167902s)
helpers_test.go:175: Cleaning up "cert-expiration-292827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-292827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-292827: (2.168317402s)
--- PASS: TestCertExpiration (228.49s)

TestDockerFlags (34.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-578998 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-578998 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.79450992s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-578998 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-578998 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-578998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-578998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-578998: (3.600715422s)
--- PASS: TestDockerFlags (34.11s)

TestForceSystemdFlag (32.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-481562 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-481562 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.86663795s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-481562 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-481562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-481562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-481562: (3.094863257s)
--- PASS: TestForceSystemdFlag (32.41s)

TestForceSystemdEnv (37.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-010279 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-010279 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.630986013s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-010279 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-010279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-010279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-010279: (2.245173868s)
--- PASS: TestForceSystemdEnv (37.16s)

TestKVMDriverInstallOrUpdate (1.23s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0919 19:21:04.898104   14476 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 19:21:04.898560   14476 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0919 19:21:04.926607   14476 install.go:62] docker-machine-driver-kvm2: exit status 1
W0919 19:21:04.926921   14476 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 19:21:04.926984   14476 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1085493684/001/docker-machine-driver-kvm2
I0919 19:21:05.070155   14476 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1085493684/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000881bd0 gz:0xc000881bd8 tar:0xc000881b80 tar.bz2:0xc000881b90 tar.gz:0xc000881ba0 tar.xz:0xc000881bb0 tar.zst:0xc000881bc0 tbz2:0xc000881b90 tgz:0xc000881ba0 txz:0xc000881bb0 tzst:0xc000881bc0 xz:0xc000881be0 zip:0xc000881bf0 zst:0xc000881be8] Getters:map[file:0xc000ba92d0 http:0xc000053590 https:0xc000053680] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 19:21:05.070199   14476 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1085493684/001/docker-machine-driver-kvm2
I0919 19:21:05.648466   14476 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 19:21:05.648547   14476 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 19:21:05.676434   14476 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0919 19:21:05.676465   14476 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0919 19:21:05.676535   14476 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 19:21:05.676564   14476 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1085493684/002/docker-machine-driver-kvm2
I0919 19:21:05.700025   14476 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1085493684/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000881bd0 gz:0xc000881bd8 tar:0xc000881b80 tar.bz2:0xc000881b90 tar.gz:0xc000881ba0 tar.xz:0xc000881bb0 tar.zst:0xc000881bc0 tbz2:0xc000881b90 tgz:0xc000881ba0 txz:0xc000881bb0 tzst:0xc000881bc0 xz:0xc000881be0 zip:0xc000881bf0 zst:0xc000881be8] Getters:map[file:0xc00097e540 http:0xc00078e780 https:0xc00078e7d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 19:21:05.700081   14476 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1085493684/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.23s)
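
The two download attempts above show the fallback pattern: the arch-specific asset 404s on its checksum file, so the downloader retries the unsuffixed common asset. A rough Go sketch of that pattern, with the real release URLs from the log but a deliberately simplified downloader (plain HTTP GET, no checksum verification):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

	// download fetches url into dst; unlike the real helper it skips checksum verification.
	func download(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		dst := "/tmp/docker-machine-driver-kvm2"
		// Arch-specific asset first; on any failure, fall back to the common asset.
		if err := download(base+"-amd64", dst); err != nil {
			fmt.Println("failed to download arch specific driver:", err, "- trying the common version")
			if err := download(base, dst); err != nil {
				fmt.Println("download failed:", err)
			}
		}
	}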

TestErrorSpam/setup (21.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-464702 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-464702 --driver=docker  --container-runtime=docker
E0919 18:52:25.545808   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.552400   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.563750   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.585927   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.627403   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.708848   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:25.870467   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:26.192194   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:26.834281   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:28.116286   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:30.679164   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:35.800652   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:52:46.042063   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-464702 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-464702 --driver=docker  --container-runtime=docker: (21.070054005s)
--- PASS: TestErrorSpam/setup (21.07s)

TestErrorSpam/start (0.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 pause
--- PASS: TestErrorSpam/pause (1.12s)

TestErrorSpam/unpause (1.28s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 unpause
--- PASS: TestErrorSpam/unpause (1.28s)

TestErrorSpam/stop (10.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 stop: (10.643940683s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-464702 --log_dir /tmp/nospam-464702 stop
--- PASS: TestErrorSpam/stop (10.81s)
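
Each ErrorSpam subtest runs the same subcommand repeatedly and scans the output for unexpected noise. A toy Go sketch of that idea for the stop case; the spam heuristic here (plain substring matching) is an assumption, much cruder than the test's real filters:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		args := []string{"-p", "nospam-464702", "--log_dir", "/tmp/nospam-464702", "stop"}
		for i := 0; i < 3; i++ {
			out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
			if err != nil {
				fmt.Println("stop failed:", err)
			}
			// Flag anything that looks like an error or warning line.
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, "error") || strings.Contains(line, "WARNING") {
					fmt.Println("possible spam:", line)
				}
			}
		}
	}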

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-7708/.minikube/files/etc/test/nested/copy/14476/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0919 18:53:06.523420   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:53:47.485314   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-847211 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m3.491360054s)
--- PASS: TestFunctional/serial/StartWithProxy (63.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.61s)

=== RUN   TestFunctional/serial/SoftStart
I0919 18:54:06.066295   14476 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-847211 --alsologtostderr -v=8: (32.608999104s)
functional_test.go:663: soft start took 32.610414502s for "functional-847211" cluster.
I0919 18:54:38.678320   14476 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (32.61s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-847211 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.28s)

TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-847211 /tmp/TestFunctionalserialCacheCmdcacheadd_local4128041183/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache add minikube-local-cache-test:functional-847211
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache delete minikube-local-cache-test:functional-847211
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-847211
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.65s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (253.012675ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
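
The reload sequence above is: remove the image inside the node, confirm crictl no longer finds it, run cache reload, confirm it is back. A compact Go sketch of the same four steps, reusing the binary path and profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary under test and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-847211"
		_ = run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		// Expected to fail now: the image was just removed from the node.
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		_ = run("-p", p, "cache", "reload")
		// Expected to succeed: reload pushed the cached image back into the node.
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}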

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 kubectl -- --context functional-847211 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-847211 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (39.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 18:55:09.407806   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-847211 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.5223191s)
functional_test.go:761: restart took 39.522423332s for "functional-847211" cluster.
I0919 18:55:23.051520   14476 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.52s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-847211 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
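
The health check above lists control-plane pods as JSON and reads each pod's phase and readiness. A small Go sketch of the phase half of that, with a struct covering only the fields it uses:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList models only the fields this sketch reads from `kubectl get po -o=json`.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-847211",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			// The real test also inspects status conditions; phase alone is shown here.
			fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		}
	}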

TestFunctional/serial/LogsCmd (0.89s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 logs
--- PASS: TestFunctional/serial/LogsCmd (0.89s)

TestFunctional/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 logs --file /tmp/TestFunctionalserialLogsFileCmd1133932955/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

TestFunctional/serial/InvalidService (4.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-847211 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-847211
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-847211: exit status 115 (312.994049ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31343 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-847211 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)
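
The assertion above is that minikube service against a service with no running pods exits non-zero (115 in this run) and reports SVC_UNREACHABLE. A short Go sketch of checking both, using exec.ExitError to recover the exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-847211")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// This run exited 115; any non-zero status signals the expected failure.
			fmt.Println("exit status:", ee.ExitCode())
		}
		if !strings.Contains(string(out), "SVC_UNREACHABLE") {
			fmt.Println("expected SVC_UNREACHABLE in the output")
		}
	}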

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 config get cpus: exit status 14 (71.833983ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 config get cpus: exit status 14 (55.741152ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

TestFunctional/parallel/DashboardCmd (9.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-847211 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-847211 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 69579: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.53s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-847211 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (126.473355ms)
-- stdout --
	* [functional-847211] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0919 18:55:55.530697   68823 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:55:55.530816   68823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:55:55.530826   68823 out.go:358] Setting ErrFile to fd 2...
	I0919 18:55:55.530832   68823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:55:55.531017   68823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 18:55:55.531589   68823 out.go:352] Setting JSON to false
	I0919 18:55:55.532683   68823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2297,"bootTime":1726769858,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:55:55.532775   68823 start.go:139] virtualization: kvm guest
	I0919 18:55:55.534457   68823 out.go:177] * [functional-847211] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:55:55.535524   68823 notify.go:220] Checking for updates...
	I0919 18:55:55.535539   68823 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:55:55.536592   68823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:55:55.537631   68823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:55:55.538719   68823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	I0919 18:55:55.539763   68823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:55:55.540657   68823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:55:55.541929   68823 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:55:55.542348   68823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:55:55.563957   68823 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:55:55.564036   68823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:55:55.609023   68823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:55:55.600623129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:55:55.609128   68823 docker.go:318] overlay module found
	I0919 18:55:55.610980   68823 out.go:177] * Using the docker driver based on existing profile
	I0919 18:55:55.612076   68823 start.go:297] selected driver: docker
	I0919 18:55:55.612088   68823 start.go:901] validating driver "docker" against &{Name:functional-847211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-847211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:55:55.612171   68823 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:55:55.613871   68823 out.go:201] 
	W0919 18:55:55.614843   68823 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 18:55:55.615834   68823 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.30s)
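
The dry run fails because the requested 250MiB sits below minikube's usable minimum of 1800MB, per the RSRC_INSUFFICIENT_REQ_MEMORY message. A toy Go version of that validation; the floor value is taken from the log, everything else is illustrative:

	package main

	import "fmt"

	// minUsableMB is the floor reported by the RSRC_INSUFFICIENT_REQ_MEMORY message.
	const minUsableMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		// 250 is the --memory value the dry run above was started with.
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}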

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-847211 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-847211 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (130.502372ms)
-- stdout --
	* [functional-847211] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0919 18:55:55.402470   68747 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:55:55.402557   68747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:55:55.402565   68747 out.go:358] Setting ErrFile to fd 2...
	I0919 18:55:55.402569   68747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:55:55.402805   68747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 18:55:55.403376   68747 out.go:352] Setting JSON to false
	I0919 18:55:55.404532   68747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2297,"bootTime":1726769858,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:55:55.404612   68747 start.go:139] virtualization: kvm guest
	I0919 18:55:55.406627   68747 out.go:177] * [functional-847211] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0919 18:55:55.407786   68747 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:55:55.407815   68747 notify.go:220] Checking for updates...
	I0919 18:55:55.409896   68747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:55:55.411026   68747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	I0919 18:55:55.412050   68747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	I0919 18:55:55.413489   68747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:55:55.414692   68747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:55:55.416084   68747 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:55:55.416509   68747 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:55:55.436776   68747 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:55:55.436889   68747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:55:55.482074   68747 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:55:55.472957045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:55:55.482174   68747 docker.go:318] overlay module found
	I0919 18:55:55.484006   68747 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0919 18:55:55.485131   68747 start.go:297] selected driver: docker
	I0919 18:55:55.485149   68747 start.go:901] validating driver "docker" against &{Name:functional-847211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-847211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:55:55.485237   68747 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:55:55.487151   68747 out.go:201] 
	W0919 18:55:55.488145   68747 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 18:55:55.489143   68747 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (16.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-847211 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-847211 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-spvmk" [308d17ed-e32d-4b07-8849-850afbc75e0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-spvmk" [308d17ed-e32d-4b07-8849-850afbc75e0b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.00342647s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32457
functional_test.go:1675: http://192.168.49.2:32457: success! body:
Hostname: hello-node-connect-67bdd5bbb4-spvmk
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32457
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.66s)
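
The last step above fetches the NodePort URL printed by service hello-node-connect --url and verifies the echoserver responds. A minimal Go sketch of that probe; the URL is specific to this run and the retry bounds are assumptions:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint printed by `minikube service hello-node-connect --url` in this run.
		url := "http://192.168.49.2:32457"
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s: success! body:\n%s\n", url, body)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("endpoint never became reachable:", url)
	}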

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (41.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fc43228b-c2f4-4dea-acb8-c2c2dc33b993] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004549616s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-847211 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-847211 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-847211 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-847211 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5215211a-8fab-4117-9242-7838bcafccdc] Pending
helpers_test.go:344: "sp-pod" [5215211a-8fab-4117-9242-7838bcafccdc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5215211a-8fab-4117-9242-7838bcafccdc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003376851s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-847211 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-847211 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-847211 delete -f testdata/storage-provisioner/pod.yaml: (1.00576718s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-847211 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [198be039-1eef-48a4-8abf-1ebec903cfec] Pending
helpers_test.go:344: "sp-pod" [198be039-1eef-48a4-8abf-1ebec903cfec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [198be039-1eef-48a4-8abf-1ebec903cfec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00335398s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-847211 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.75s)
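
The sequence above is the canonical persistence check: create the claim, write through a pod, delete the pod, and confirm a replacement pod still sees the file. Below is a minimal sketch of the same flow driven through kubectl, assuming the testdata manifests referenced in the log; the wait steps are noted as comments.

// pvc_persistence.go - a sketch of the persistence check above.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-847211"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for sp-pod to be Running, then write through the mount.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Delete the pod; the claim (and its data) must survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for the new sp-pod, then verify the file is still there.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}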

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh -n functional-847211 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cp functional-847211:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1893169398/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh -n functional-847211 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh -n functional-847211 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (21.74s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-847211 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-mtffx" [bf44c46d-53ba-48cb-bf05-d55beb15b47b] Pending
helpers_test.go:344: "mysql-6cdb49bbb-mtffx" [bf44c46d-53ba-48cb-bf05-d55beb15b47b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-mtffx" [bf44c46d-53ba-48cb-bf05-d55beb15b47b] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003766356s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;": exit status 1 (116.635289ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0919 18:55:48.107352   14476 retry.go:31] will retry after 786.359738ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;": exit status 1 (109.626415ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0919 18:55:49.004109   14476 retry.go:31] will retry after 1.480396305s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;": exit status 1 (100.641227ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0919 18:55:50.585490   14476 retry.go:31] will retry after 1.864933959s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-847211 exec mysql-6cdb49bbb-mtffx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.74s)
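
Both errors above (ERROR 1045 while credentials initialize, ERROR 2002 while the socket comes up) are transient during mysqld startup, which is why the harness retries with a growing delay. A minimal sketch of that retry pattern follows; it is not the harness's actual retry.go implementation.

// retry_sketch.go - re-run the probe with a growing, jittered delay until
// it succeeds or the attempt budget is spent.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-847211",
			"exec", "mysql-6cdb49bbb-mtffx", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		if err := cmd.Run(); err == nil {
			fmt.Println("mysqld is accepting connections")
			return
		}
		// Both ERROR 1045 and ERROR 2002 are transient while mysqld
		// initializes, so any non-zero exit is simply retried.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	fmt.Println("gave up waiting for mysqld")
}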

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14476/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /etc/test/nested/copy/14476/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14476.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /etc/ssl/certs/14476.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14476.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /usr/share/ca-certificates/14476.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/144762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /etc/ssl/certs/144762.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/144762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /usr/share/ca-certificates/144762.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-847211 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh "sudo systemctl is-active crio": exit status 1 (293.954077ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
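
The test passes despite the non-zero exit: "systemctl is-active" exits 0 only for an active unit, and the status-3 exit for the inactive crio unit propagates through ssh (hence "ssh: Process exited with status 3" above). A minimal sketch of interpreting that exit code locally:

// runtime_check.go - exit status 0 from `systemctl is-active` means active;
// a non-zero status (3 here) with "inactive" on stdout means the unit
// exists but is not running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	if err == nil {
		fmt.Println("crio is active - unexpected when docker is the runtime")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("crio is %q (exit status %d)\n", string(out), ee.ExitCode())
	}
}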

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-847211 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-847211
docker.io/kicbase/echo-server:functional-847211
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-847211 image ls --format short --alsologtostderr:
I0919 18:55:59.832658   70972 out.go:345] Setting OutFile to fd 1 ...
I0919 18:55:59.832980   70972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:55:59.832994   70972 out.go:358] Setting ErrFile to fd 2...
I0919 18:55:59.833003   70972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:55:59.833301   70972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:55:59.834096   70972 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:55:59.834218   70972 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:55:59.834802   70972 cli_runner.go:164] Run: docker container inspect functional-847211 --format={{.State.Status}}
I0919 18:55:59.852263   70972 ssh_runner.go:195] Run: systemctl --version
I0919 18:55:59.852321   70972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-847211
I0919 18:55:59.871704   70972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/functional-847211/id_rsa Username:docker}
I0919 18:55:59.964655   70972 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-847211 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-847211 | e7aba9b830a1d | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-847211 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-847211 image ls --format table --alsologtostderr:
I0919 18:56:03.078930   72296 out.go:345] Setting OutFile to fd 1 ...
I0919 18:56:03.079042   72296 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:03.079053   72296 out.go:358] Setting ErrFile to fd 2...
I0919 18:56:03.079072   72296 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:03.079668   72296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:56:03.080506   72296 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:03.080657   72296 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:03.081196   72296 cli_runner.go:164] Run: docker container inspect functional-847211 --format={{.State.Status}}
I0919 18:56:03.100132   72296 ssh_runner.go:195] Run: systemctl --version
I0919 18:56:03.100186   72296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-847211
I0919 18:56:03.119153   72296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/functional-847211/id_rsa Username:docker}
I0919 18:56:03.215796   72296 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-847211 image ls --format json --alsologtostderr:
[{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-847211"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e7aba9b830a1d3773fc1005005f866fee7a081e4a3f7edcb15d77020326419be","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-847211"],"size":"30"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5
b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.
k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-847211 image ls --format json --alsologtostderr:
I0919 18:56:02.774587   72240 out.go:345] Setting OutFile to fd 1 ...
I0919 18:56:02.774700   72240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:02.774710   72240 out.go:358] Setting ErrFile to fd 2...
I0919 18:56:02.774715   72240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:02.774921   72240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:56:02.775704   72240 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:02.775830   72240 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:02.776319   72240 cli_runner.go:164] Run: docker container inspect functional-847211 --format={{.State.Status}}
I0919 18:56:02.796132   72240 ssh_runner.go:195] Run: systemctl --version
I0919 18:56:02.796189   72240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-847211
I0919 18:56:02.818408   72240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/functional-847211/id_rsa Username:docker}
I0919 18:56:02.940045   72240 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
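
The JSON above is a flat array of image records. A minimal sketch that decodes it; the struct fields mirror the keys visible in this output (note that size is a decimal string, not a number):

// imagels_json.go - decode the `image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // decimal string in the output above
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-847211",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}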

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-847211 image ls --format yaml --alsologtostderr:
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-847211
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e7aba9b830a1d3773fc1005005f866fee7a081e4a3f7edcb15d77020326419be
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-847211
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-847211 image ls --format yaml --alsologtostderr:
I0919 18:56:00.052514   71086 out.go:345] Setting OutFile to fd 1 ...
I0919 18:56:00.052626   71086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:00.052637   71086 out.go:358] Setting ErrFile to fd 2...
I0919 18:56:00.052642   71086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:00.052837   71086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:56:00.053414   71086 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:00.053513   71086 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:00.053892   71086 cli_runner.go:164] Run: docker container inspect functional-847211 --format={{.State.Status}}
I0919 18:56:00.070581   71086 ssh_runner.go:195] Run: systemctl --version
I0919 18:56:00.070626   71086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-847211
I0919 18:56:00.087921   71086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/functional-847211/id_rsa Username:docker}
I0919 18:56:00.220265   71086 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh pgrep buildkitd: exit status 1 (341.072201ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image build -t localhost/my-image:functional-847211 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-847211 image build -t localhost/my-image:functional-847211 testdata/build --alsologtostderr: (3.135189245s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-847211 image build -t localhost/my-image:functional-847211 testdata/build --alsologtostderr:
I0919 18:56:00.676599   71354 out.go:345] Setting OutFile to fd 1 ...
I0919 18:56:00.676899   71354 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:00.676908   71354 out.go:358] Setting ErrFile to fd 2...
I0919 18:56:00.676913   71354 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 18:56:00.677097   71354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
I0919 18:56:00.677841   71354 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:00.678580   71354 config.go:182] Loaded profile config "functional-847211": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0919 18:56:00.679220   71354 cli_runner.go:164] Run: docker container inspect functional-847211 --format={{.State.Status}}
I0919 18:56:00.700128   71354 ssh_runner.go:195] Run: systemctl --version
I0919 18:56:00.700192   71354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-847211
I0919 18:56:00.724458   71354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/functional-847211/id_rsa Username:docker}
I0919 18:56:00.820013   71354 build_images.go:161] Building image from path: /tmp/build.100093962.tar
I0919 18:56:00.820116   71354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 18:56:00.828976   71354 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.100093962.tar
I0919 18:56:00.832738   71354 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.100093962.tar: stat -c "%s %y" /var/lib/minikube/build/build.100093962.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.100093962.tar': No such file or directory
I0919 18:56:00.832765   71354 ssh_runner.go:362] scp /tmp/build.100093962.tar --> /var/lib/minikube/build/build.100093962.tar (3072 bytes)
I0919 18:56:00.886815   71354 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.100093962
I0919 18:56:00.896522   71354 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.100093962 -xf /var/lib/minikube/build/build.100093962.tar
I0919 18:56:00.906282   71354 docker.go:360] Building image: /var/lib/minikube/build/build.100093962
I0919 18:56:00.906351   71354 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-847211 /var/lib/minikube/build/build.100093962
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6d2e677545fb7bb75f817385abb6025b7e002d922a7b1a8147240016c6f43c6b done
#8 naming to localhost/my-image:functional-847211 done
#8 DONE 0.0s
I0919 18:56:03.738567   71354 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-847211 /var/lib/minikube/build/build.100093962: (2.832192551s)
I0919 18:56:03.738616   71354 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.100093962
I0919 18:56:03.749783   71354 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.100093962.tar
I0919 18:56:03.758288   71354 build_images.go:217] Built localhost/my-image:functional-847211 from /tmp/build.100093962.tar
I0919 18:56:03.758313   71354 build_images.go:133] succeeded building to: functional-847211
I0919 18:56:03.758318   71354 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
2024/09/19 18:56:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)
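
The BuildKit steps above imply a three-instruction context: a FROM on the busybox image, RUN true, and ADD content.txt. A minimal sketch that recreates such a context and feeds it to "minikube image build" follows; the Dockerfile text and content.txt payload are inferred from steps #1-#7 in the log, not copied from the real testdata/build directory.

// image_build_sketch.go - recreate a build context matching the log's
// BuildKit steps (#5 FROM busybox, #6 RUN true, #7 ADD content.txt) and
// build it inside the minikube node.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	// Inferred Dockerfile; the real one is 97B per step #1.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-847211",
		"image", "build", "-t", "localhost/my-image:functional-847211", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}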

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-847211
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image load --daemon kicbase/echo-server:functional-847211 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 63650: os: process already finished
helpers_test.go:502: unable to terminate pid 63321: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-847211 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5b5bfe70-1105-46a1-8dc8-145175af524a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5b5bfe70-1105-46a1-8dc8-145175af524a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003580217s
I0919 18:55:39.211656   14476 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image load --daemon kicbase/echo-server:functional-847211 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-847211
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image load --daemon kicbase/echo-server:functional-847211 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image save kicbase/echo-server:functional-847211 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image rm kicbase/echo-server:functional-847211 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-847211
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 image save --daemon kicbase/echo-server:functional-847211 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-847211
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-847211 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
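
The jsonpath query above returns the ingress IP only once the running tunnel has patched the LoadBalancer status. A minimal sketch that polls the same query until the IP appears:

// wait_ingress_ip.go - wait for the tunnel to assign an ingress IP, using
// the same jsonpath query as the test above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const query = "jsonpath={.status.loadBalancer.ingress[0].ip}"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-847211",
			"get", "svc", "nginx-svc", "-o", query).Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("tunnel assigned ingress IP %s\n", out)
			return
		}
		// Empty output with exit 0 means the status is not patched yet.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP before deadline")
}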

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.54.137 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-847211 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-847211 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-847211 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-xn7v6" [13c9b493-1d98-4cd7-a822-00e2159da799] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-xn7v6" [13c9b493-1d98-4cd7-a822-00e2159da799] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.003339534s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.15s)

TestFunctional/parallel/DockerEnv/bash (0.83s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-847211 docker-env) && out/minikube-linux-amd64 status -p functional-847211"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-847211 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.83s)
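
The bash eval in the test applies the "export KEY=..." lines that docker-env prints, so the following docker images call talks to the cluster's Docker daemon. A minimal sketch of the same wiring done in-process; the exact bash output format of docker-env is an assumption here.

// dockerenv_sketch.go - parse the emitted `export KEY="VALUE"` lines
// (bash format assumed) and apply them, then run `docker images` against
// the cluster's daemon.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-847211",
		"docker-env").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments such as the eval hint
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], `"`))
		}
	}
	cmd := exec.Command("docker", "images")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}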

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "326.367963ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.675761ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/MountCmd/any-port (6.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdany-port3433423965/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772152558695540" to /tmp/TestFunctionalparallelMountCmdany-port3433423965/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772152558695540" to /tmp/TestFunctionalparallelMountCmdany-port3433423965/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772152558695540" to /tmp/TestFunctionalparallelMountCmdany-port3433423965/001/test-1726772152558695540
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.524417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 18:55:52.827486   14476 retry.go:31] will retry after 652.332123ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 18:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 18:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 18:55 test-1726772152558695540
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh cat /mount-9p/test-1726772152558695540
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-847211 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0b825e03-288a-48a9-a927-0cccaf627166] Pending
helpers_test.go:344: "busybox-mount" [0b825e03-288a-48a9-a927-0cccaf627166] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0b825e03-288a-48a9-a927-0cccaf627166] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0b825e03-288a-48a9-a927-0cccaf627166] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004238673s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-847211 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdany-port3433423965/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.70s)
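
The retry.go line above ("will retry after 652.332123ms") shows how the test rides out the brief window before the 9p mount becomes visible: rerun the findmnt probe after a randomized, growing delay. A minimal sketch of that pattern in Go, not minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retry reruns fn up to attempts times, sleeping a randomized,
	// growing delay after each failure.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retry(5, 300*time.Millisecond, func() error {
			// The same probe the test runs inside the guest.
			return exec.Command("findmnt", "-T", "/mount-9p").Run()
		})
		fmt.Println("mount visible:", err == nil)
	}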

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "297.276125ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.425968ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/ServiceCmd/List (0.91s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-847211 service list -o json: (1.00465797s)
functional_test.go:1494: Took "1.004759242s" to run "out/minikube-linux-amd64 -p functional-847211 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.00s)
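
Because "service list -o json" emits machine-readable output, callers usually feed it straight into a JSON decoder. The schema is not shown in this log, so the sketch below decodes generically and merely counts entries (it assumes the top level is an array; if not, the error path fires):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-847211",
			"service", "list", "-o", "json").Output()
		if err != nil {
			fmt.Println("service list failed:", err)
			return
		}
		var services []map[string]any
		if err := json.Unmarshal(out, &services); err != nil {
			fmt.Println("unexpected shape:", err)
			return
		}
		fmt.Printf("found %d services\n", len(services))
	}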

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31544
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31544
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
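
The HTTPS, Format, and URL subtests all resolve the hello-node NodePort to the same endpoint (port 31544 on 192.168.49.2). A quick Go sketch of confirming such an endpoint actually answers, using the URL found in this run:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 3 * time.Second}
		// Endpoint reported by "minikube service hello-node --url" above.
		resp, err := client.Get("http://192.168.49.2:31544")
		if err != nil {
			fmt.Println("endpoint unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // e.g. "200 OK" when healthy
	}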

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdspecific-port2188536398/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.148987ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 18:55:59.524405   14476 retry.go:31] will retry after 349.933019ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdspecific-port2188536398/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh "sudo umount -f /mount-9p": exit status 1 (286.574129ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-847211 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdspecific-port2188536398/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T" /mount1: exit status 1 (379.464353ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 18:56:01.373590   14476 retry.go:31] will retry after 504.011377ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-847211 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-847211 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-847211 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1124044669/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
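
The helpers_test.go:490 lines ("unable to find parent, assuming dead") come from probing whether the mount daemons survived "mount --kill=true". On Unix the standard probe is signal 0, which checks that a PID exists without delivering anything; a minimal sketch, assuming a POSIX host:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// processAlive reports whether pid still exists.
	func processAlive(pid int) bool {
		proc, err := os.FindProcess(pid) // never fails on Unix
		if err != nil {
			return false
		}
		// Signal 0 performs existence/permission checks only.
		return proc.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		fmt.Println(processAlive(os.Getpid())) // true: we are running
		fmt.Println(processAlive(999999))      // almost certainly false
	}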

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-847211
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-847211
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-847211
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (99.48s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639649 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0919 18:57:25.546375   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:57:53.249762   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-639649 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.828678814s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.48s)

TestMultiControlPlane/serial/DeployApp (4.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-639649 -- rollout status deployment/busybox: (2.632961235s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-x86sd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-xbstl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-z2ckr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-x86sd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-xbstl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-z2ckr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-x86sd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-xbstl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-z2ckr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.41s)

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-x86sd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-x86sd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-xbstl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-xbstl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-z2ckr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639649 -- exec busybox-7dff88458-z2ckr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)

TestMultiControlPlane/serial/AddWorkerNode (20.65s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-639649 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-639649 -v=7 --alsologtostderr: (19.87069466s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.65s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-639649 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (15.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp testdata/cp-test.txt ha-639649:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3146692346/001/cp-test_ha-639649.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649:/home/docker/cp-test.txt ha-639649-m02:/home/docker/cp-test_ha-639649_ha-639649-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test_ha-639649_ha-639649-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649:/home/docker/cp-test.txt ha-639649-m03:/home/docker/cp-test_ha-639649_ha-639649-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test_ha-639649_ha-639649-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649:/home/docker/cp-test.txt ha-639649-m04:/home/docker/cp-test_ha-639649_ha-639649-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test_ha-639649_ha-639649-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp testdata/cp-test.txt ha-639649-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3146692346/001/cp-test_ha-639649-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m02:/home/docker/cp-test.txt ha-639649:/home/docker/cp-test_ha-639649-m02_ha-639649.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test_ha-639649-m02_ha-639649.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m02:/home/docker/cp-test.txt ha-639649-m03:/home/docker/cp-test_ha-639649-m02_ha-639649-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test_ha-639649-m02_ha-639649-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m02:/home/docker/cp-test.txt ha-639649-m04:/home/docker/cp-test_ha-639649-m02_ha-639649-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test_ha-639649-m02_ha-639649-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp testdata/cp-test.txt ha-639649-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3146692346/001/cp-test_ha-639649-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m03:/home/docker/cp-test.txt ha-639649:/home/docker/cp-test_ha-639649-m03_ha-639649.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test_ha-639649-m03_ha-639649.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m03:/home/docker/cp-test.txt ha-639649-m02:/home/docker/cp-test_ha-639649-m03_ha-639649-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test_ha-639649-m03_ha-639649-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m03:/home/docker/cp-test.txt ha-639649-m04:/home/docker/cp-test_ha-639649-m03_ha-639649-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test_ha-639649-m03_ha-639649-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp testdata/cp-test.txt ha-639649-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3146692346/001/cp-test_ha-639649-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m04:/home/docker/cp-test.txt ha-639649:/home/docker/cp-test_ha-639649-m04_ha-639649.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649 "sudo cat /home/docker/cp-test_ha-639649-m04_ha-639649.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m04:/home/docker/cp-test.txt ha-639649-m02:/home/docker/cp-test_ha-639649-m04_ha-639649-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m02 "sudo cat /home/docker/cp-test_ha-639649-m04_ha-639649-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 cp ha-639649-m04:/home/docker/cp-test.txt ha-639649-m03:/home/docker/cp-test_ha-639649-m04_ha-639649-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 ssh -n ha-639649-m03 "sudo cat /home/docker/cp-test_ha-639649-m04_ha-639649-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.07s)

TestMultiControlPlane/serial/StopSecondaryNode (11.24s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-639649 node stop m02 -v=7 --alsologtostderr: (10.61492234s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr: exit status 7 (621.439493ms)

                                                
                                                
-- stdout --
	ha-639649
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-639649-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639649-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-639649-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 18:58:46.678116   99622 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:58:46.678232   99622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:58:46.678242   99622 out.go:358] Setting ErrFile to fd 2...
	I0919 18:58:46.678248   99622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:58:46.678434   99622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 18:58:46.678629   99622 out.go:352] Setting JSON to false
	I0919 18:58:46.678662   99622 mustload.go:65] Loading cluster: ha-639649
	I0919 18:58:46.678753   99622 notify.go:220] Checking for updates...
	I0919 18:58:46.679196   99622 config.go:182] Loaded profile config "ha-639649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:58:46.679223   99622 status.go:174] checking status of ha-639649 ...
	I0919 18:58:46.679659   99622 cli_runner.go:164] Run: docker container inspect ha-639649 --format={{.State.Status}}
	I0919 18:58:46.696628   99622 status.go:364] ha-639649 host status = "Running" (err=<nil>)
	I0919 18:58:46.696650   99622 host.go:66] Checking if "ha-639649" exists ...
	I0919 18:58:46.696862   99622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-639649
	I0919 18:58:46.713068   99622 host.go:66] Checking if "ha-639649" exists ...
	I0919 18:58:46.713280   99622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:58:46.713313   99622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-639649
	I0919 18:58:46.728802   99622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/ha-639649/id_rsa Username:docker}
	I0919 18:58:46.819551   99622 ssh_runner.go:195] Run: systemctl --version
	I0919 18:58:46.823163   99622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:58:46.833092   99622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:58:46.878596   99622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-19 18:58:46.869791769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:58:46.879167   99622 kubeconfig.go:125] found "ha-639649" server: "https://192.168.49.254:8443"
	I0919 18:58:46.879199   99622 api_server.go:166] Checking apiserver status ...
	I0919 18:58:46.879231   99622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:58:46.889818   99622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2413/cgroup
	I0919 18:58:46.898268   99622 api_server.go:182] apiserver freezer: "4:freezer:/docker/7cacb69740cf2f0fd645bfc8e261c11d8ae4c0fd9aae98245cbd7675ddfb040d/kubepods/burstable/pod4f801496cb14162ab27d5c61264051b3/01d747bcd14397d8bc6fe3cefc0389e7841679053ad4a8b061a4f8824e932ba9"
	I0919 18:58:46.898336   99622 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7cacb69740cf2f0fd645bfc8e261c11d8ae4c0fd9aae98245cbd7675ddfb040d/kubepods/burstable/pod4f801496cb14162ab27d5c61264051b3/01d747bcd14397d8bc6fe3cefc0389e7841679053ad4a8b061a4f8824e932ba9/freezer.state
	I0919 18:58:46.906023   99622 api_server.go:204] freezer state: "THAWED"
	I0919 18:58:46.906042   99622 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 18:58:46.909589   99622 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 18:58:46.909619   99622 status.go:456] ha-639649 apiserver status = Running (err=<nil>)
	I0919 18:58:46.909632   99622 status.go:176] ha-639649 status: &{Name:ha-639649 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 18:58:46.909654   99622 status.go:174] checking status of ha-639649-m02 ...
	I0919 18:58:46.909912   99622 cli_runner.go:164] Run: docker container inspect ha-639649-m02 --format={{.State.Status}}
	I0919 18:58:46.926181   99622 status.go:364] ha-639649-m02 host status = "Stopped" (err=<nil>)
	I0919 18:58:46.926196   99622 status.go:377] host is not running, skipping remaining checks
	I0919 18:58:46.926202   99622 status.go:176] ha-639649-m02 status: &{Name:ha-639649-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 18:58:46.926218   99622 status.go:174] checking status of ha-639649-m03 ...
	I0919 18:58:46.926516   99622 cli_runner.go:164] Run: docker container inspect ha-639649-m03 --format={{.State.Status}}
	I0919 18:58:46.944283   99622 status.go:364] ha-639649-m03 host status = "Running" (err=<nil>)
	I0919 18:58:46.944301   99622 host.go:66] Checking if "ha-639649-m03" exists ...
	I0919 18:58:46.944509   99622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-639649-m03
	I0919 18:58:46.959786   99622 host.go:66] Checking if "ha-639649-m03" exists ...
	I0919 18:58:46.960052   99622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:58:46.960097   99622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-639649-m03
	I0919 18:58:46.977001   99622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32794 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/ha-639649-m03/id_rsa Username:docker}
	I0919 18:58:47.067679   99622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:58:47.077795   99622 kubeconfig.go:125] found "ha-639649" server: "https://192.168.49.254:8443"
	I0919 18:58:47.077818   99622 api_server.go:166] Checking apiserver status ...
	I0919 18:58:47.077860   99622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:58:47.087901   99622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2289/cgroup
	I0919 18:58:47.095909   99622 api_server.go:182] apiserver freezer: "4:freezer:/docker/874688c2b3e32f0023f4bf5b6b0fdadcbe7d58bc3f5b46063e3679985c7b5f96/kubepods/burstable/pod45834aa813a30a940258b4ced37f8e61/117efa6af016f4c9eb9a0caa905155379ab63613e1fc9a3c4fbe94fbe65cceb9"
	I0919 18:58:47.095967   99622 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/874688c2b3e32f0023f4bf5b6b0fdadcbe7d58bc3f5b46063e3679985c7b5f96/kubepods/burstable/pod45834aa813a30a940258b4ced37f8e61/117efa6af016f4c9eb9a0caa905155379ab63613e1fc9a3c4fbe94fbe65cceb9/freezer.state
	I0919 18:58:47.103191   99622 api_server.go:204] freezer state: "THAWED"
	I0919 18:58:47.103215   99622 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 18:58:47.106817   99622 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 18:58:47.106843   99622 status.go:456] ha-639649-m03 apiserver status = Running (err=<nil>)
	I0919 18:58:47.106852   99622 status.go:176] ha-639649-m03 status: &{Name:ha-639649-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 18:58:47.106866   99622 status.go:174] checking status of ha-639649-m04 ...
	I0919 18:58:47.107148   99622 cli_runner.go:164] Run: docker container inspect ha-639649-m04 --format={{.State.Status}}
	I0919 18:58:47.123638   99622 status.go:364] ha-639649-m04 host status = "Running" (err=<nil>)
	I0919 18:58:47.123659   99622 host.go:66] Checking if "ha-639649-m04" exists ...
	I0919 18:58:47.123887   99622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-639649-m04
	I0919 18:58:47.140396   99622 host.go:66] Checking if "ha-639649-m04" exists ...
	I0919 18:58:47.140653   99622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:58:47.140691   99622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-639649-m04
	I0919 18:58:47.156344   99622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/ha-639649-m04/id_rsa Username:docker}
	I0919 18:58:47.247439   99622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:58:47.257505   99622 status.go:176] ha-639649-m04 status: &{Name:ha-639649-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.24s)
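
The stderr trace above shows how "status" decides an apiserver is Running: locate the kube-apiserver PID, confirm its freezer cgroup is THAWED, then GET /healthz on the load-balanced endpoint and expect 200 with body "ok". A minimal sketch of that final probe; skipping TLS verification is an assumption for brevity (the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil // body is typically "ok", as in the trace above
	}

	func main() {
		fmt.Println(checkHealthz("https://192.168.49.254:8443/healthz"))
	}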

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.85s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-639649 node start m02 -v=7 --alsologtostderr: (33.886885105s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.97s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-639649 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-639649 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-639649 -v=7 --alsologtostderr: (33.345977205s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639649 --wait=true -v=7 --alsologtostderr
E0919 19:00:29.939022   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:29.945395   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:29.956728   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:29.978051   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:30.019391   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:30.100814   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:30.262325   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:30.583986   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:31.225982   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:32.508220   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:35.070492   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:40.191765   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:00:50.433807   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:10.916007   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:51.877326   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:25.546548   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-639649 --wait=true -v=7 --alsologtostderr: (2m59.529322784s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-639649
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.97s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.19s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-639649 node delete m03 -v=7 --alsologtostderr: (8.462421403s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.19s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (32.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 stop -v=7 --alsologtostderr
E0919 19:03:13.799887   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-639649 stop -v=7 --alsologtostderr: (32.495724767s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr: exit status 7 (90.043934ms)

                                                
                                                
-- stdout --
	ha-639649
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639649-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639649-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:03:38.941549  129831 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:03:38.941651  129831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:03:38.941660  129831 out.go:358] Setting ErrFile to fd 2...
	I0919 19:03:38.941664  129831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:03:38.941837  129831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 19:03:38.941981  129831 out.go:352] Setting JSON to false
	I0919 19:03:38.942006  129831 mustload.go:65] Loading cluster: ha-639649
	I0919 19:03:38.942061  129831 notify.go:220] Checking for updates...
	I0919 19:03:38.942480  129831 config.go:182] Loaded profile config "ha-639649": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:03:38.942504  129831 status.go:174] checking status of ha-639649 ...
	I0919 19:03:38.942969  129831 cli_runner.go:164] Run: docker container inspect ha-639649 --format={{.State.Status}}
	I0919 19:03:38.959621  129831 status.go:364] ha-639649 host status = "Stopped" (err=<nil>)
	I0919 19:03:38.959638  129831 status.go:377] host is not running, skipping remaining checks
	I0919 19:03:38.959644  129831 status.go:176] ha-639649 status: &{Name:ha-639649 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:03:38.959664  129831 status.go:174] checking status of ha-639649-m02 ...
	I0919 19:03:38.959861  129831 cli_runner.go:164] Run: docker container inspect ha-639649-m02 --format={{.State.Status}}
	I0919 19:03:38.974987  129831 status.go:364] ha-639649-m02 host status = "Stopped" (err=<nil>)
	I0919 19:03:38.975003  129831 status.go:377] host is not running, skipping remaining checks
	I0919 19:03:38.975008  129831 status.go:176] ha-639649-m02 status: &{Name:ha-639649-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:03:38.975023  129831 status.go:174] checking status of ha-639649-m04 ...
	I0919 19:03:38.975267  129831 cli_runner.go:164] Run: docker container inspect ha-639649-m04 --format={{.State.Status}}
	I0919 19:03:38.990935  129831 status.go:364] ha-639649-m04 host status = "Stopped" (err=<nil>)
	I0919 19:03:38.990963  129831 status.go:377] host is not running, skipping remaining checks
	I0919 19:03:38.990976  129831 status.go:176] ha-639649-m04 status: &{Name:ha-639649-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.59s)
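
Note the exit status 7 above: "minikube status" signals host state through its exit code instead of failing outright, so callers can branch on it. A minimal sketch of reading that code from Go; beyond "nonzero means not fully running", the code-to-state mapping is not documented in this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func statusExitCode(profile string) (int, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "status")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), nil // e.g. 7 here, with every host stopped
		}
		if err != nil {
			return -1, err // the binary could not be started at all
		}
		return 0, nil
	}

	func main() {
		fmt.Println(statusExitCode("ha-639649"))
	}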

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (81.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639649 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-639649 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.535351065s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (34.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-639649 --control-plane -v=7 --alsologtostderr
E0919 19:05:29.936025   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-639649 --control-plane -v=7 --alsologtostderr: (34.132905054s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-639649 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestImageBuild/serial/Setup (20.5s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-124947 --driver=docker  --container-runtime=docker
E0919 19:05:57.642068   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-124947 --driver=docker  --container-runtime=docker: (20.497002361s)
--- PASS: TestImageBuild/serial/Setup (20.50s)

TestImageBuild/serial/NormalBuild (1.19s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-124947
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-124947: (1.190125943s)
--- PASS: TestImageBuild/serial/NormalBuild (1.19s)

TestImageBuild/serial/BuildWithBuildArg (0.71s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-124947
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.71s)

TestImageBuild/serial/BuildWithDockerIgnore (0.53s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-124947
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.53s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.53s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-124947
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.53s)

TestJSONOutput/start/Command (34.51s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-142592 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-142592 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (34.507122579s)
--- PASS: TestJSONOutput/start/Command (34.51s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
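
Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert invariants over the CloudEvents-style JSON lines that minikube emits with --output=json (the TestErrorJSONOutput stdout later in this report shows the event shape). Below is a minimal Go sketch of the increasing-step check, reading one JSON object per line from stdin; names and structure are illustrative, not the suite's actual code.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strconv"
    )

    // cloudEvent keeps only the fields the check needs.
    type cloudEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        last := -1
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev cloudEvent
            if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
                continue // not a step event
            }
            step, err := strconv.Atoi(ev.Data["currentstep"])
            if err != nil {
                continue
            }
            if step <= last {
                fmt.Printf("step regression: %d after %d\n", step, last)
                os.Exit(1)
            }
            last = step
        }
    }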

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-142592 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-142592 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-142592 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-142592 --output=json --user=testUser: (10.813647845s)
--- PASS: TestJSONOutput/stop/Command (10.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-529971 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-529971 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.933066ms)

-- stdout --
	{"specversion":"1.0","id":"1200f57d-05b7-4437-aa15-dfbe8ae6a0a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-529971] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1d5fbd4-902c-40e7-bcbb-3f00bef33b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"8d7d761e-aad4-40ac-9adc-478bc57c7bcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7bd58512-5fd4-447e-82e7-920b36e1c6ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig"}}
	{"specversion":"1.0","id":"c2491ae7-0b70-40c5-b623-f8e47da0199a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube"}}
	{"specversion":"1.0","id":"d50e64df-3378-4c87-9b37-ff447a7b6cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"37cbc491-e9a7-4296-a445-ac94f1cfc0cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce507ea6-1e7d-4872-befb-9615cdd5a7ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-529971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-529971
--- PASS: TestErrorJSONOutput (0.18s)
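
Note: the failing-driver path emits a single io.k8s.sigs.minikube.error event, as shown in the stdout above. A short Go sketch that extracts the exit code and error name from such a line (the embedded event is trimmed to the fields used; not the suite's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // One error event from the stream above, trimmed for brevity.
    const line = `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

    func main() {
        var ev struct {
            Type string            `json:"type"`
            Data map[string]string `json:"data"`
        }
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        fmt.Println(ev.Data["exitcode"], ev.Data["name"]) // 56 DRV_UNSUPPORTED_OS
    }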

TestKicCustomNetwork/create_custom_network (22.21s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-152439 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-152439 --network=: (20.182753567s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-152439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-152439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-152439: (2.00574408s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.21s)

TestKicCustomNetwork/use_default_bridge_network (22.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-194164 --network=bridge
E0919 19:07:25.547279   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-194164 --network=bridge: (20.618545934s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-194164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-194164
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-194164: (1.801982962s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.44s)

TestKicExistingNetwork (24.7s)

=== RUN   TestKicExistingNetwork
I0919 19:07:42.294903   14476 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 19:07:42.310164   14476 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 19:07:42.310222   14476 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 19:07:42.310245   14476 cli_runner.go:164] Run: docker network inspect existing-network
W0919 19:07:42.325513   14476 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 19:07:42.325556   14476 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0919 19:07:42.325575   14476 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0919 19:07:42.325722   14476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:07:42.341076   14476 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f765d4ef3abb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:32:d2:b4:93} reservation:<nil>}
I0919 19:07:42.341490   14476 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00061e640}
I0919 19:07:42.341513   14476 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 19:07:42.341558   14476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 19:07:42.400626   14476 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-644828 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-644828 --network=existing-network: (22.711106921s)
helpers_test.go:175: Cleaning up "existing-network-644828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-644828
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-644828: (1.849086175s)
I0919 19:08:06.977373   14476 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.70s)
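
Note: the network_create.go lines above reduce to a single Docker CLI call. A Go sketch that issues the same command via os/exec, assuming the docker binary is on PATH; subnet, options, and names are copied from the log, and error handling is minimal. The minikube labels are what let a later listing pass (docker network ls --filter=label=..., as at the end of this test) find networks the suite created.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors: docker network create --driver=bridge --subnet=192.168.58.0/24 ...
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out) // network ID on success
        if err != nil {
            fmt.Println("create failed:", err)
        }
    }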

TestKicCustomSubnet (23.99s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-286795 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-286795 --subnet=192.168.60.0/24: (22.044085798s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-286795 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-286795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-286795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-286795: (1.925428161s)
--- PASS: TestKicCustomSubnet (23.99s)

TestKicStaticIP (26.03s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-130923 --static-ip=192.168.200.200
E0919 19:08:48.611229   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-130923 --static-ip=192.168.200.200: (23.996907294s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-130923 ip
helpers_test.go:175: Cleaning up "static-ip-130923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-130923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-130923: (1.922944439s)
--- PASS: TestKicStaticIP (26.03s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (50.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-960761 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-960761 --driver=docker  --container-runtime=docker: (20.715433454s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-969950 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-969950 --driver=docker  --container-runtime=docker: (24.37240077s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-960761
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-969950
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-969950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-969950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-969950: (2.009463479s)
helpers_test.go:175: Cleaning up "first-960761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-960761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-960761: (2.036657768s)
--- PASS: TestMinikubeProfile (50.22s)

TestMountStart/serial/StartWithMountFirst (6.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-955527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-955527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.458031345s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.46s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-955527 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (6.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-966236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-966236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.157337981s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.16s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-966236 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-955527 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-955527 --alsologtostderr -v=5: (1.454270659s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-966236 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-966236
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-966236: (1.166338035s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.67s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-966236
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-966236: (6.670870529s)
--- PASS: TestMountStart/serial/RestartStopped (7.67s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-966236 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (56.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0919 19:10:29.936381   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.55137065s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.97s)

TestMultiNode/serial/DeployApp2Nodes (47.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-635400 -- rollout status deployment/busybox: (2.728594793s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:12.676122   14476 retry.go:31] will retry after 790.629015ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:13.571548   14476 retry.go:31] will retry after 1.61014953s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:15.289865   14476 retry.go:31] will retry after 1.773925616s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:17.173512   14476 retry.go:31] will retry after 2.329532741s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:19.610878   14476 retry.go:31] will retry after 7.059249196s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:26.774115   14476 retry.go:31] will retry after 9.352841958s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:36.230520   14476 retry.go:31] will retry after 5.807142392s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 19:11:42.144826   14476 retry.go:31] will retry after 13.766922597s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-456mz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-qb525 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-456mz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-qb525 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-456mz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-qb525 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (47.34s)
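
Note: the retry.go lines above show the suite polling pod IPs with a growing delay until both busybox pods report an address. A generic Go sketch of that retry-with-backoff pattern; it is an illustration of the idea, not minikube's actual retry helper.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs f up to attempts times, doubling the wait after each failure.
    func retry(attempts int, delay time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        ips := 1
        err := retry(5, 800*time.Millisecond, func() error {
            if ips < 2 {
                ips++ // stands in for re-querying {.items[*].status.podIP}
                return errors.New("expected 2 Pod IPs but got 1 (may be temporary)")
            }
            return nil
        })
        fmt.Println("result:", err)
    }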

TestMultiNode/serial/PingHostFrom2Pods (0.67s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-456mz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-456mz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-qb525 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-635400 -- exec busybox-7dff88458-qb525 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)

TestMultiNode/serial/AddNode (18.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-635400 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-635400 -v 3 --alsologtostderr: (17.922431321s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.51s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-635400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (8.74s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp testdata/cp-test.txt multinode-635400:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2441827694/001/cp-test_multinode-635400.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400:/home/docker/cp-test.txt multinode-635400-m02:/home/docker/cp-test_multinode-635400_multinode-635400-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test_multinode-635400_multinode-635400-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400:/home/docker/cp-test.txt multinode-635400-m03:/home/docker/cp-test_multinode-635400_multinode-635400-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test_multinode-635400_multinode-635400-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp testdata/cp-test.txt multinode-635400-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2441827694/001/cp-test_multinode-635400-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m02:/home/docker/cp-test.txt multinode-635400:/home/docker/cp-test_multinode-635400-m02_multinode-635400.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test_multinode-635400-m02_multinode-635400.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m02:/home/docker/cp-test.txt multinode-635400-m03:/home/docker/cp-test_multinode-635400-m02_multinode-635400-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test_multinode-635400-m02_multinode-635400-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp testdata/cp-test.txt multinode-635400-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2441827694/001/cp-test_multinode-635400-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m03:/home/docker/cp-test.txt multinode-635400:/home/docker/cp-test_multinode-635400-m03_multinode-635400.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400 "sudo cat /home/docker/cp-test_multinode-635400-m03_multinode-635400.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 cp multinode-635400-m03:/home/docker/cp-test.txt multinode-635400-m02:/home/docker/cp-test_multinode-635400-m03_multinode-635400-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 ssh -n multinode-635400-m02 "sudo cat /home/docker/cp-test_multinode-635400-m03_multinode-635400-m02.txt"
E0919 19:12:25.546064   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/CopyFile (8.74s)
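
Note: every copy check above is the same round trip: minikube cp a file onto a node, then minikube ssh -n <node> to cat it back for comparison. A Go sketch of one round trip, assuming a minikube binary on PATH and this run's multinode-635400 profile; the remote command quoting follows the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) (string, error) {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if _, err := run("-p", "multinode-635400", "cp",
            "testdata/cp-test.txt", "multinode-635400:/home/docker/cp-test.txt"); err != nil {
            fmt.Println("cp failed:", err)
            return
        }
        got, err := run("-p", "multinode-635400", "ssh", "-n", "multinode-635400",
            "sudo cat /home/docker/cp-test.txt")
        fmt.Println(got, err) // compare got against the local testdata file
    }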

TestMultiNode/serial/StopNode (2.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-635400 node stop m03: (1.163892505s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635400 status: exit status 7 (439.498555ms)

-- stdout --
	multinode-635400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr: exit status 7 (447.602234ms)

-- stdout --
	multinode-635400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-635400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-635400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 19:12:27.237899  215633 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:12:27.238160  215633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:12:27.238170  215633 out.go:358] Setting ErrFile to fd 2...
	I0919 19:12:27.238174  215633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:12:27.238349  215633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 19:12:27.238518  215633 out.go:352] Setting JSON to false
	I0919 19:12:27.238548  215633 mustload.go:65] Loading cluster: multinode-635400
	I0919 19:12:27.238595  215633 notify.go:220] Checking for updates...
	I0919 19:12:27.238942  215633 config.go:182] Loaded profile config "multinode-635400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:12:27.238961  215633 status.go:174] checking status of multinode-635400 ...
	I0919 19:12:27.239377  215633 cli_runner.go:164] Run: docker container inspect multinode-635400 --format={{.State.Status}}
	I0919 19:12:27.259701  215633 status.go:364] multinode-635400 host status = "Running" (err=<nil>)
	I0919 19:12:27.259739  215633 host.go:66] Checking if "multinode-635400" exists ...
	I0919 19:12:27.259992  215633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-635400
	I0919 19:12:27.275963  215633 host.go:66] Checking if "multinode-635400" exists ...
	I0919 19:12:27.276167  215633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:12:27.276213  215633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-635400
	I0919 19:12:27.292375  215633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32911 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/multinode-635400/id_rsa Username:docker}
	I0919 19:12:27.387727  215633 ssh_runner.go:195] Run: systemctl --version
	I0919 19:12:27.391755  215633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:12:27.401500  215633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:12:27.447724  215633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 19:12:27.438240185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:12:27.448309  215633 kubeconfig.go:125] found "multinode-635400" server: "https://192.168.67.2:8443"
	I0919 19:12:27.448344  215633 api_server.go:166] Checking apiserver status ...
	I0919 19:12:27.448383  215633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:12:27.458750  215633 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2375/cgroup
	I0919 19:12:27.466914  215633 api_server.go:182] apiserver freezer: "4:freezer:/docker/88b666abfb766528c02dfadbf6bbd3a244c7f5bf4c311639b391a33d3813cfb4/kubepods/burstable/podf63bb8dcece707d9ec5300cfae39bc8d/edecdd8615ec2eaa716b92007bed6ac3f6003ce81110ebb62da8a5f4674e3cac"
	I0919 19:12:27.466962  215633 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/88b666abfb766528c02dfadbf6bbd3a244c7f5bf4c311639b391a33d3813cfb4/kubepods/burstable/podf63bb8dcece707d9ec5300cfae39bc8d/edecdd8615ec2eaa716b92007bed6ac3f6003ce81110ebb62da8a5f4674e3cac/freezer.state
	I0919 19:12:27.474331  215633 api_server.go:204] freezer state: "THAWED"
	I0919 19:12:27.474357  215633 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 19:12:27.477883  215633 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 19:12:27.477906  215633 status.go:456] multinode-635400 apiserver status = Running (err=<nil>)
	I0919 19:12:27.477916  215633 status.go:176] multinode-635400 status: &{Name:multinode-635400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:12:27.477942  215633 status.go:174] checking status of multinode-635400-m02 ...
	I0919 19:12:27.478161  215633 cli_runner.go:164] Run: docker container inspect multinode-635400-m02 --format={{.State.Status}}
	I0919 19:12:27.494531  215633 status.go:364] multinode-635400-m02 host status = "Running" (err=<nil>)
	I0919 19:12:27.494547  215633 host.go:66] Checking if "multinode-635400-m02" exists ...
	I0919 19:12:27.494770  215633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-635400-m02
	I0919 19:12:27.510060  215633 host.go:66] Checking if "multinode-635400-m02" exists ...
	I0919 19:12:27.510301  215633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:12:27.510331  215633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-635400-m02
	I0919 19:12:27.525964  215633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32916 SSHKeyPath:/home/jenkins/minikube-integration/19664-7708/.minikube/machines/multinode-635400-m02/id_rsa Username:docker}
	I0919 19:12:27.615532  215633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:12:27.625524  215633 status.go:176] multinode-635400-m02 status: &{Name:multinode-635400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:12:27.625559  215633 status.go:174] checking status of multinode-635400-m03 ...
	I0919 19:12:27.625822  215633 cli_runner.go:164] Run: docker container inspect multinode-635400-m03 --format={{.State.Status}}
	I0919 19:12:27.642798  215633 status.go:364] multinode-635400-m03 host status = "Stopped" (err=<nil>)
	I0919 19:12:27.642818  215633 status.go:377] host is not running, skipping remaining checks
	I0919 19:12:27.642825  215633 status.go:176] multinode-635400-m03 status: &{Name:multinode-635400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)
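The stderr trace above shows the full status probe: resolve the apiserver's freezer cgroup, confirm it is THAWED, then hit /healthz and require a 200 "ok". A minimal Go sketch of that final health check, assuming the endpoint from the log is reachable and (for brevity only) skipping TLS verification where minikube's real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; substitute your own apiserver URL.
	url := "https://192.168.67.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate checks are skipped to keep the sketch
		// self-contained; a real probe should trust the cluster CA.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}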

TestMultiNode/serial/StartAfterStop (9.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-635400 node start m03 -v=7 --alsologtostderr: (8.870559738s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.52s)

TestMultiNode/serial/RestartKeepsNodes (99.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635400
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-635400
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-635400: (22.397960986s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635400 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635400 --wait=true -v=8 --alsologtostderr: (1m16.821851643s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635400
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.30s)

TestMultiNode/serial/DeleteNode (5.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-635400 node delete m03: (4.592398406s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)
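The final `kubectl get nodes -o go-template` call prints one Ready-condition status per node. Because kubectl evaluates ordinary Go text/template syntax over the decoded object, the same template can be tried locally; the JSON below is a hypothetical, trimmed stand-in for `kubectl get nodes -o json` output:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Trimmed, hypothetical node list carrying only the fields the template reads.
const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

// The exact template from the test invocation above.
const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True"
		panic(err)
	}
}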

TestMultiNode/serial/StopMultiNode (21.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-635400 stop: (21.226246326s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635400 status: exit status 7 (79.227976ms)
-- stdout --
	multinode-635400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr: exit status 7 (76.014351ms)
-- stdout --
	multinode-635400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-635400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0919 19:14:42.936258  230960 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:14:42.936515  230960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:14:42.936525  230960 out.go:358] Setting ErrFile to fd 2...
	I0919 19:14:42.936529  230960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:14:42.936708  230960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7708/.minikube/bin
	I0919 19:14:42.936881  230960 out.go:352] Setting JSON to false
	I0919 19:14:42.936908  230960 mustload.go:65] Loading cluster: multinode-635400
	I0919 19:14:42.937030  230960 notify.go:220] Checking for updates...
	I0919 19:14:42.937462  230960 config.go:182] Loaded profile config "multinode-635400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:14:42.937490  230960 status.go:174] checking status of multinode-635400 ...
	I0919 19:14:42.938032  230960 cli_runner.go:164] Run: docker container inspect multinode-635400 --format={{.State.Status}}
	I0919 19:14:42.954427  230960 status.go:364] multinode-635400 host status = "Stopped" (err=<nil>)
	I0919 19:14:42.954459  230960 status.go:377] host is not running, skipping remaining checks
	I0919 19:14:42.954468  230960 status.go:176] multinode-635400 status: &{Name:multinode-635400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:14:42.954511  230960 status.go:174] checking status of multinode-635400-m02 ...
	I0919 19:14:42.954756  230960 cli_runner.go:164] Run: docker container inspect multinode-635400-m02 --format={{.State.Status}}
	I0919 19:14:42.970939  230960 status.go:364] multinode-635400-m02 host status = "Stopped" (err=<nil>)
	I0919 19:14:42.970956  230960 status.go:377] host is not running, skipping remaining checks
	I0919 19:14:42.970961  230960 status.go:176] multinode-635400-m02 status: &{Name:multinode-635400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.38s)

TestMultiNode/serial/RestartMultiNode (53.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635400 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0919 19:15:29.936276   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635400 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.00149156s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-635400 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.55s)

TestMultiNode/serial/ValidateNameConflict (24.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-635400
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635400-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-635400-m02 --driver=docker  --container-runtime=docker: exit status 14 (64.875854ms)
-- stdout --
	* [multinode-635400-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-635400-m02' is duplicated with machine name 'multinode-635400-m02' in profile 'multinode-635400'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-635400-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-635400-m03 --driver=docker  --container-runtime=docker: (22.235688327s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-635400
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-635400: exit status 80 (251.635095ms)
-- stdout --
	* Adding node m03 to cluster multinode-635400 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-635400-m03 already exists in multinode-635400-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-635400-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-635400-m03: (2.010815815s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.61s)
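Exit status 14 above is minikube's MK_USAGE error: the proposed profile name collides with a machine name already owned by profile multinode-635400. An illustrative sketch of that uniqueness rule (hypothetical helper and data shapes, not minikube's actual validation code):

package main

import "fmt"

// validateProfileName rejects a proposed profile name that matches an
// existing profile or any machine name inside one, mirroring the
// "Profile name should be unique" rule seen above. Illustrative only.
func validateProfileName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		if name == profile {
			return fmt.Errorf("profile name %q is already in use", name)
		}
		for _, m := range machines {
			if name == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-635400": {"multinode-635400", "multinode-635400-m02"},
	}
	fmt.Println(validateProfileName("multinode-635400-m02", existing))
}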

TestPreload (93.86s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-904478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0919 19:16:53.004085   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-904478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m2.304295448s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-904478 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-904478 image pull gcr.io/k8s-minikube/busybox: (1.28521667s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-904478
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-904478: (10.64366838s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-904478 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0919 19:17:25.546758   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-904478 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (17.263743293s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-904478 image list
helpers_test.go:175: Cleaning up "test-preload-904478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-904478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-904478: (2.138032431s)
--- PASS: TestPreload (93.86s)

TestScheduledStopUnix (94.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-357005 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-357005 --memory=2048 --driver=docker  --container-runtime=docker: (21.9319884s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-357005 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-357005 -n scheduled-stop-357005
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-357005 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 19:18:01.036804   14476 retry.go:31] will retry after 121.35µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.036996   14476 retry.go:31] will retry after 222.086µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.038123   14476 retry.go:31] will retry after 254.09µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.039241   14476 retry.go:31] will retry after 337.306µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.040359   14476 retry.go:31] will retry after 339.823µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.041486   14476 retry.go:31] will retry after 758.203µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.042600   14476 retry.go:31] will retry after 600.787µs: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.043717   14476 retry.go:31] will retry after 1.762475ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.045946   14476 retry.go:31] will retry after 3.636039ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.050260   14476 retry.go:31] will retry after 2.522863ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.053463   14476 retry.go:31] will retry after 7.885188ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.061659   14476 retry.go:31] will retry after 11.460457ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.073869   14476 retry.go:31] will retry after 18.26687ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
I0919 19:18:01.093101   14476 retry.go:31] will retry after 24.453824ms: open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-357005 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-357005 -n scheduled-stop-357005
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-357005
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-357005 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-357005
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-357005: exit status 7 (59.033126ms)
-- stdout --
	scheduled-stop-357005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-357005 -n scheduled-stop-357005
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-357005 -n scheduled-stop-357005: exit status 7 (59.612196ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-357005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-357005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-357005: (1.578025073s)
--- PASS: TestScheduledStopUnix (94.73s)
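The retry.go lines in this test poll for the scheduled-stop pid file with short, growing delays. A minimal sketch of that wait loop, assuming the pid-file path from the log and a simple doubling backoff (the real retry helper randomizes its intervals):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls until the pid file exists, roughly doubling the
// delay each attempt, as the retry.go log lines above do.
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		b, err := os.ReadFile(path)
		if err == nil {
			return b, nil
		}
		lastErr = err
		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return nil, lastErr
}

func main() {
	// Path taken from the log above; it will differ on another machine.
	path := "/home/jenkins/minikube-integration/19664-7708/.minikube/profiles/scheduled-stop-357005/pid"
	if _, err := waitForPidFile(path, 14); err != nil {
		fmt.Println("gave up:", err)
	}
}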

TestSkaffold (98.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1908893965 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-217660 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-217660 --memory=2600 --driver=docker  --container-runtime=docker: (23.81437509s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1908893965 run --minikube-profile skaffold-217660 --kube-context skaffold-217660 --status-check=true --port-forward=false --interactive=false
E0919 19:20:29.935831   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1908893965 run --minikube-profile skaffold-217660 --kube-context skaffold-217660 --status-check=true --port-forward=false --interactive=false: (1m0.003067052s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f4d448884-t2cbd" [286da4c2-6ad0-462f-b44a-edc7f7f2c910] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003358099s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7ddf977b4-6d8rh" [b3036311-43fd-45b9-8200-b6282f506d6f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003458693s
helpers_test.go:175: Cleaning up "skaffold-217660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-217660
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-217660: (2.674734445s)
--- PASS: TestSkaffold (98.19s)

TestInsufficientStorage (9.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-559168 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-559168 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.353783209s)
-- stdout --
	{"specversion":"1.0","id":"51f049c9-b535-4a50-8535-18dbaa3e32f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-559168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1be0ca3-f690-4e83-bf74-4337a1e8d015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"9e8d1be5-47c0-4fa9-98f4-bf55a9f107c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d91a5eb-9cb3-4e71-a00f-82535d541839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig"}}
	{"specversion":"1.0","id":"d529d6ba-cc43-4cd6-9a88-cd0bb3020a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube"}}
	{"specversion":"1.0","id":"d059cac7-eb17-441b-8129-a0b458a08b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9c39ce73-96f9-4b65-8222-0845d34f6beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34a04fc5-b3b9-4a1c-bf6b-5970cf6ac18d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f26e74a6-1b01-4801-853d-8866c7f64a84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"03da0955-4f9f-4823-8fac-412583eec0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"deda899b-0cd9-4e79-9df2-983f3a343fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b37c3122-7480-4356-8ddd-5d25687acf49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-559168\" primary control-plane node in \"insufficient-storage-559168\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"321a30a9-14f5-435a-a313-3def75bcab8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c14e6558-c9e6-428e-a2ff-65e4e462579f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"13bb2a8a-a80b-4d7d-a661-b84f9f17983e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-559168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-559168 --output=json --layout=cluster: exit status 7 (246.198146ms)
-- stdout --
	{"Name":"insufficient-storage-559168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-559168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0919 19:20:59.226939  271279 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-559168" does not appear in /home/jenkins/minikube-integration/19664-7708/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-559168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-559168 --output=json --layout=cluster: exit status 7 (248.497059ms)
-- stdout --
	{"Name":"insufficient-storage-559168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-559168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0919 19:20:59.476045  271380 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-559168" does not appear in /home/jenkins/minikube-integration/19664-7708/kubeconfig
	E0919 19:20:59.485242  271380 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/insufficient-storage-559168/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-559168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-559168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-559168: (1.597855633s)
--- PASS: TestInsufficientStorage (9.45s)
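With --output=json, minikube emits one CloudEvents-style JSON object per line, as captured above; the final io.k8s.sigs.minikube.error event carries the exit code and advice. A small sketch decoding just the fields this test cares about (struct trimmed to those fields; the sample line is shortened from the log):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent keeps only the fields inspected here; the full events
// also carry specversion, id, source, and more.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Shortened from the captured stdout above.
	line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity)."}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit %s): %s\n",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}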

TestRunningBinaryUpgrade (102.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2477484057 start -p running-upgrade-991811 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2477484057 start -p running-upgrade-991811 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m10.685175923s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-991811 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-991811 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.82730285s)
helpers_test.go:175: Cleaning up "running-upgrade-991811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-991811
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-991811: (3.797968777s)
--- PASS: TestRunningBinaryUpgrade (102.04s)

TestKubernetesUpgrade (326.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.357955529s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-782834
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-782834: (1.181408226s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-782834 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-782834 status --format={{.Host}}: exit status 7 (68.195606ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m29.711123104s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-782834 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (66.382275ms)
-- stdout --
	* [kubernetes-upgrade-782834] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-782834
	    minikube start -p kubernetes-upgrade-782834 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7828342 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-782834 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-782834 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.174197851s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-782834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-782834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-782834: (2.463934047s)
--- PASS: TestKubernetesUpgrade (326.09s)
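The K8S_DOWNGRADE_UNSUPPORTED failure above comes from comparing the requested version against the cluster's current one. A rough sketch of such a guard (hand-rolled parsing that handles plain release versions only; minikube's real check is more thorough):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.31.1" into numeric fields; no prerelease handling,
// which is enough for the release versions in this test.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// isDowngrade reports whether requested is older than current, the
// condition rejected above with K8S_DOWNGRADE_UNSUPPORTED.
func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < len(c) && i < len(r); i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.31.1", "v1.20.0")) // true -> refuse
}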

TestMissingContainerUpgrade (130.25s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3659363992 start -p missing-upgrade-943018 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3659363992 start -p missing-upgrade-943018 --memory=2200 --driver=docker  --container-runtime=docker: (1m8.809805134s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-943018
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-943018: (10.454523203s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-943018
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-943018 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0919 19:22:25.546520   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-943018 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.218980827s)
helpers_test.go:175: Cleaning up "missing-upgrade-943018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-943018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-943018: (2.182628605s)
--- PASS: TestMissingContainerUpgrade (130.25s)

TestPause/serial/Start (34.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-261011 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-261011 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (34.920051658s)
--- PASS: TestPause/serial/Start (34.92s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (66.401472ms)
-- stdout --
	* [NoKubernetes-132272] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (26.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132272 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132272 --driver=docker  --container-runtime=docker: (26.133850702s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-132272 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.48s)

TestPause/serial/SecondStartNoReconfiguration (33.09s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-261011 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-261011 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.07186913s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.09s)

TestNoKubernetes/serial/StartWithStopK8s (7.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --driver=docker  --container-runtime=docker: (5.348709844s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-132272 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-132272 status -o json: exit status 2 (266.899414ms)
-- stdout --
	{"Name":"NoKubernetes-132272","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-132272
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-132272: (1.6537495s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.27s)
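`status -o json` above returns a flat object per profile, and the non-zero exit (status 2 here) appears to flag that the Kubernetes components are stopped even though the host runs. Decoding that line in Go is direct; the struct below mirrors only the fields shown:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields of the status line captured above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out := `{"Name":"NoKubernetes-132272","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(out), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}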

TestNoKubernetes/serial/Start (8.56s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132272 --no-kubernetes --driver=docker  --container-runtime=docker: (8.555540847s)
--- PASS: TestNoKubernetes/serial/Start (8.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-132272 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-132272 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.45981ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
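The check above relies on systemctl's exit-code contract: `is-active --quiet` exits 0 for an active unit and non-zero otherwise (the status 3 surfaced through ssh is systemd's code for an inactive unit). A local sketch of the same probe, run directly instead of over `minikube ssh`:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Requires a systemd host; the test runs this inside the minikube
	// container via ssh instead.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run systemctl:", err)
		return
	}
	fmt.Println("kubelet active")
}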

TestNoKubernetes/serial/ProfileList (1.75s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.75s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-261011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-261011 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-261011 --output=json --layout=cluster: exit status 2 (310.296678ms)
-- stdout --
	{"Name":"pause-261011","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-261011","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-132272
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-132272: (1.205780276s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestPause/serial/Unpause (0.46s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-261011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.46s)

TestPause/serial/PauseAgain (0.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-261011 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

TestNoKubernetes/serial/StartNoArgs (8.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132272 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132272 --driver=docker  --container-runtime=docker: (8.35244646s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.35s)

TestPause/serial/DeletePaused (4.18s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-261011 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-261011 --alsologtostderr -v=5: (4.178010063s)
--- PASS: TestPause/serial/DeletePaused (4.18s)

TestPause/serial/VerifyDeletedResources (0.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-261011
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-261011: exit status 1 (16.703774ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-261011: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)

TestStoppedBinaryUpgrade/Setup (0.53s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

TestStoppedBinaryUpgrade/Upgrade (62.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3630412947 start -p stopped-upgrade-442832 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3630412947 start -p stopped-upgrade-442832 --memory=2200 --vm-driver=docker  --container-runtime=docker: (28.344968925s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3630412947 -p stopped-upgrade-442832 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3630412947 -p stopped-upgrade-442832 stop: (10.764865689s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-442832 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-442832 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.129710736s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-132272 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-132272 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.245074ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestNetworkPlugins/group/auto/Start (62.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m2.796611591s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-442832
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-442832: (1.047695472s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (45.61361504s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.61s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-717995 "pgrep -a kubelet"
I0919 19:25:11.129615   14476 config.go:182] Loaded profile config "auto-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.56s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-717995 replace --force -f testdata/netcat-deployment.yaml
I0919 19:25:11.470392   14476 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0919 19:25:11.676481   14476 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hldzc" [8cdc0555-dddc-4471-848a-8acc296481f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hldzc" [8cdc0555-dddc-4471-848a-8acc296481f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003728233s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.56s)
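
Each NetCatPod step force-replaces a small netcat Deployment from testdata and then polls until a pod matching app=netcat reports Running and Ready; the Pending lines above are the normal progression while the dnsutils container image is pulled. An equivalent manual wait, assuming the same deployment and label names as the manifest:

	kubectl --context auto-717995 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-717995 wait pod -l app=netcat --for=condition=Ready --timeout=15m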

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
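
The Localhost and HairPin steps are both in-pod netcat port scans: Localhost confirms the pod can reach its own port via 127.0.0.1, while HairPin dials the pod back through its own service name ("netcat"), which only succeeds when the network plugin supports hairpin traffic. The flags mean: -z scan without sending data, -w 5 cap the connect wait at five seconds, -i 5 pace the probes:

	# hairpin check: the pod dials itself through its service
	kubectl --context auto-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"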

                                                
                                    
TestNetworkPlugins/group/calico/Start (58.99s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0919 19:25:40.523956   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (58.989199703s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.99s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9wghq" [daaf3d69-122a-4a54-8a8c-f2f5e6a113cf] Running
E0919 19:25:58.449104   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003629604s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
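
ControllerPod steps gate the rest of each group on the CNI's own daemon pod becoming healthy, found by label selector in its namespace (app=kindnet in kube-system here; k8s-app=calico-node and app=flannel in the later groups). A hand-rolled equivalent of the wait:

	kubectl --context kindnet-717995 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m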

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-717995 "pgrep -a kubelet"
I0919 19:26:00.002230   14476 config.go:182] Loaded profile config "kindnet-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l7dst" [450c0452-f462-4041-b131-c0c6e1b023fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l7dst" [450c0452-f462-4041-b131-c0c6e1b023fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004421287s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (47.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (47.662903999s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Start (69.90s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m9.901582323s)
--- PASS: TestNetworkPlugins/group/false/Start (69.90s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4pmfc" [7c683c7e-df04-416b-bd11-b4015f8885e1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004728475s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-717995 "pgrep -a kubelet"
I0919 19:26:45.502721   14476 config.go:182] Loaded profile config "calico-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.20s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-48jhp" [96014ae5-a408-4825-874a-f443fce342c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-48jhp" [96014ae5-a408-4825-874a-f443fce342c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004861839s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-717995 "pgrep -a kubelet"
I0919 19:26:52.119717   14476 config.go:182] Loaded profile config "custom-flannel-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6lp68" [1eaaf0b3-22d3-4426-9b8c-41ddb3da7517] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6lp68" [1eaaf0b3-22d3-4426-9b8c-41ddb3da7517] Running
E0919 19:26:59.892589   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.005517793s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m6.04185517s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0919 19:27:25.545815   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (43.722105229s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.72s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-717995 "pgrep -a kubelet"
I0919 19:27:41.615315   14476 config.go:182] Loaded profile config "false-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b96pn" [f986e51f-1266-4e48-ba54-a2f6780a4af8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b96pn" [f986e51f-1266-4e48-ba54-a2f6780a4af8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003537181s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vnjln" [9cdb1a90-4d5a-4ca2-acce-2c8760e437f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003176864s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-717995 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (67.79s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
I0919 19:28:10.548798   14476 config.go:182] Loaded profile config "flannel-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m7.789883815s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.79s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bfzr5" [7cb9650e-1236-4add-8f90-bf7f1c5b4219] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bfzr5" [7cb9650e-1236-4add-8f90-bf7f1c5b4219] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003516015s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-717995 "pgrep -a kubelet"
I0919 19:28:20.278536   14476 config.go:182] Loaded profile config "enable-default-cni-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pg7vv" [7754bdf6-47f4-4bee-8e8f-ddbcae41184d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pg7vv" [7754bdf6-47f4-4bee-8e8f-ddbcae41184d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004518173s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-717995 exec deployment/netcat -- nslookup kubernetes.default
E0919 19:28:21.814837   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (33.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-717995 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (33.286828511s)
--- PASS: TestNetworkPlugins/group/bridge/Start (33.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (132.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-455243 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-455243 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m12.154611108s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-827971 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-827971 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.766154862s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.77s)
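
no-preload starts disable minikube's preloaded image tarball, so the Kubernetes images are pulled individually; that is why this FirstStart runs noticeably longer than the preloaded default-k8s-diff-port start below (35.30s). The flag combination from the log:

	# --preload=false skips the preloaded images tarball; expect a slower first start
	out/minikube-linux-amd64 start -p no-preload-827971 --memory=2200 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1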

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-717995 "pgrep -a kubelet"
I0919 19:29:15.198293   14476 config.go:182] Loaded profile config "bridge-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c7zxk" [251001b5-8ab2-4248-ab30-60915c39274f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c7zxk" [251001b5-8ab2-4248-ab30-60915c39274f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003424519s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-717995 "pgrep -a kubelet"
I0919 19:29:18.511210   14476 config.go:182] Loaded profile config "kubenet-717995": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-717995 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cb87w" [c3449e63-393a-410c-8d47-4afba7badf16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cb87w" [c3449e63-393a-410c-8d47-4afba7badf16] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 8.004400278s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (21.07s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-717995 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-717995 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13227081s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:29:40.579734   14476 retry.go:31] will retry after 816.134863ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-717995 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-717995 exec deployment/netcat -- nslookup kubernetes.default: (5.124100101s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.07s)
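
This is the only DNS probe in the run that needed a retry: the first nslookup timed out before in-cluster DNS had converged, the harness backed off ("will retry after 816.134863ms"), and the second attempt resolved, so the step still passes with about 21s of wall time. A comparable retry loop in shell:

	# retry the lookup a few times with a short backoff instead of failing on the first timeout
	for i in 1 2 3 4 5; do
	  kubectl --context bridge-717995 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 2
	done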

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-717995 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-669664 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-669664 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m7.178509675s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-717995 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0919 19:34:42.395475   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-827971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8dfd4edf-4c47-4bba-94d4-8edd36b49eac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8dfd4edf-4c47-4bba-94d4-8edd36b49eac] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004060725s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-827971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.29s)
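
DeployApp steps schedule a plain busybox pod, wait for it to run, and then read the container's open-file limit; a surprisingly low value would indicate the runtime is not passing sane defaults through to workloads. The check reduces to:

	# the test asserts the command succeeds and prints the fd limit
	kubectl --context no-preload-827971 exec busybox -- /bin/sh -c "ulimit -n"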

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (35.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-062865 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-062865 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (35.295845581s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (35.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-827971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-827971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.669933052s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-827971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-827971 --alsologtostderr -v=3
E0919 19:30:11.465393   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.471783   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.483595   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.505553   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.546925   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.631730   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:11.793761   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:12.115721   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:12.757898   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:14.039906   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:16.601995   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-827971 --alsologtostderr -v=3: (11.034041991s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)
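
The repeated cert_rotation E-lines here and elsewhere in the run are noise rather than failures: the client-go certificate watcher still references client.crt files of profiles that earlier tests already deleted (auto-717995, skaffold-217660, and so on), so each rotation attempt logs a missing-file error. When scanning a run for real problems they can be filtered out, e.g. (log file name hypothetical):

	grep -v 'cert_rotation.go' docker_linux.log | grep -E '^--- (PASS|FAIL|SKIP)'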

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-827971 -n no-preload-827971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-827971 -n no-preload-827971: exit status 7 (155.002695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-827971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)
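
minikube status intentionally exits non-zero while the host is down, so the harness accepts exit status 7 here ("may be ok") and only then re-enables the dashboard addon against the stopped profile. A tolerant scripted check of the same state:

	# capture the state without letting the non-zero exit abort a script
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-827971 || echo "host not running (exit $?)"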

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (263.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-827971 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:30:21.724192   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:29.935941   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:31.965474   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:37.945698   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-827971 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.782442188s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-827971 -n no-preload-827971
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-062865 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61d795cc-81cc-4ef1-a63d-0b06c77e4a08] Pending
helpers_test.go:344: "busybox" [61d795cc-81cc-4ef1-a63d-0b06c77e4a08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61d795cc-81cc-4ef1-a63d-0b06c77e4a08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004075488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-062865 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-062865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-062865 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-062865 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-062865 --alsologtostderr -v=3: (10.765154358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-669664 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d43af4a-e399-4022-9f8e-bdd8dca1d265] Pending
E0919 19:30:52.447212   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8d43af4a-e399-4022-9f8e-bdd8dca1d265] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 19:30:53.682175   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:53.688577   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:53.699934   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:53.721354   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:53.762783   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:53.844243   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:54.005760   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:54.327188   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:54.969209   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8d43af4a-e399-4022-9f8e-bdd8dca1d265] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004103796s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-669664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)
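
The DeployApp step creates the busybox pod from testdata/busybox.yaml, waits (with an 8m0s budget) for pods labelled integration-test=busybox to reach Running, then execs ulimit -n inside the pod. A rough Go sketch of the wait half, assuming plain kubectl with a jsonpath query instead of the real helpers_test.go logic:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLabel polls kubectl until every pod matching the selector reports
// phase Running, or the deadline passes. Sketch only; the real wait logic
// lives in helpers_test.go.
func waitForLabel(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	err := waitForLabel("embed-certs-669664", "default", "integration-test=busybox", 8*time.Minute)
	fmt.Println(err)
}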

TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-455243 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd2a4375-2318-45f0-a43f-26a21391d31b] Pending
helpers_test.go:344: "busybox" [dd2a4375-2318-45f0-a43f-26a21391d31b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 19:30:56.251264   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [dd2a4375-2318-45f0-a43f-26a21391d31b] Running
E0919 19:30:58.813404   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.00374193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-455243 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865: exit status 7 (72.989776ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-062865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
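
The "(may be ok)" note reflects that minikube status encodes state in its exit code: exit status 7 here means the host is Stopped, not that the command itself broke, so the test accepts it and enables the dashboard addon against the stopped profile. A small standard-library sketch of reading that code (the helper name hostStatus is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the {{.Host}} field and the command's exit code.
// Exit status 7 after a stop simply means "Stopped", as the log shows.
func hostStatus(binary, profile string) (string, int) {
	out, err := exec.Command(binary, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	out, code := hostStatus("out/minikube-linux-amd64", "default-k8s-diff-port-062865")
	fmt.Printf("host=%q exit=%d\n", out, code) // expect "Stopped", 7
}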

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-669664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-669664 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)
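
EnableAddonWhileActive deliberately points the metrics-server addon at registry.k8s.io/echoserver:1.4 behind the unreachable registry fake.domain, then only inspects the resulting deployment with kubectl describe; the pod is never expected to pull. A sketch of the enable call as logged (helper name invented here):

package main

import (
	"fmt"
	"os/exec"
)

// enableMetricsServer replays the logged command: enable the addon with its
// image and registry overridden to values that can never be pulled.
func enableMetricsServer(binary, profile string) error {
	out, err := exec.Command(binary, "addons", "enable", "metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain").CombinedOutput()
	if err != nil {
		return fmt.Errorf("enable failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(enableMetricsServer("out/minikube-linux-amd64", "embed-certs-669664"))
}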

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-062865 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-062865 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.263272259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.55s)
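
SecondStart reruns minikube start against the stopped profile with the same flags as the first start; minikube reuses the existing docker container, so the 4m22s above is dominated by waiting for the apiserver and pods rather than re-provisioning. A sketch of the restart call, flags copied from the log and the helper name invented here:

package main

import (
	"fmt"
	"os/exec"
)

// secondStart restarts an existing profile with the same flag set used for
// the first start, as the logged command does.
func secondStart(binary, profile string) error {
	args := []string{"start", "-p", profile, "--memory=2200", "--alsologtostderr",
		"--wait=true", "--apiserver-port=8444", "--driver=docker",
		"--container-runtime=docker", "--kubernetes-version=v1.31.1"}
	out, err := exec.Command(binary, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("second start failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(secondStart("out/minikube-linux-amd64", "default-k8s-diff-port-062865"))
}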

TestStartStop/group/embed-certs/serial/Stop (10.8s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-669664 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-669664 --alsologtostderr -v=3: (10.795183627s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-455243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-455243 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-455243 --alsologtostderr -v=3
E0919 19:31:03.935684   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:05.656114   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/skaffold-217660/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-455243 --alsologtostderr -v=3: (10.882477176s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669664 -n embed-certs-669664
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669664 -n embed-certs-669664: exit status 7 (109.715832ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-669664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (265.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-669664 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-669664 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m25.337832953s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669664 -n embed-certs-669664
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-455243 -n old-k8s-version-455243
E0919 19:31:14.177728   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-455243 -n old-k8s-version-455243: exit status 7 (84.395564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-455243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (130.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-455243 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0919 19:31:33.409363   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:34.664496   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.182337   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.188652   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.200023   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.221415   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.262782   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.344269   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.506539   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:39.828194   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:40.470398   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:41.751710   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:44.313729   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:49.435366   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.325373   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.331782   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.343146   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.364475   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.405830   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.487269   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.649093   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:52.970817   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:53.612336   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:54.893994   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:57.456311   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:59.677329   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:02.577730   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:12.819470   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:15.626002   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:20.158857   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:25.546211   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/addons-807343/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:33.301132   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.817420   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.823790   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.835144   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.856604   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.898019   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:41.979359   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:42.140870   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:42.462660   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:43.104739   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:44.386570   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:46.948535   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:52.070530   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:32:55.331304   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:01.120236   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:02.312794   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.277725   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.284070   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.295383   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.316646   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.357981   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.439402   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.600895   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:04.922418   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:05.564617   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:06.846641   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:09.408833   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:14.262798   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:14.530741   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.457846   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.464224   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.475565   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.496920   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.538308   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.620368   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:20.782112   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:21.103768   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:21.745196   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:22.795093   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:23.026593   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:24.772116   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-455243 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.659774298s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-455243 -n old-k8s-version-455243
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (130.96s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nj4hj" [49a74bb4-5612-48b0-9458-03bf2d2a68ac] Running
E0919 19:33:25.588767   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:33:30.710330   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003468641s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nj4hj" [49a74bb4-5612-48b0-9458-03bf2d2a68ac] Running
E0919 19:33:33.006104   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004069786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-455243 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
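
AddonExistsAfterStop verifies the dashboard objects survived the stop/restart cycle: it waits for the k8s-app=kubernetes-dashboard pod again and then describes the dashboard-metrics-scraper deployment. Since kubectl describe exits non-zero when the object is missing, the call doubles as an existence check; a sketch under that assumption:

package main

import (
	"fmt"
	"os/exec"
)

// deployExists reports whether a deployment survived the stop/start cycle;
// a non-zero exit from "kubectl describe" means the addon objects were not
// restored. Sketch only, using the context name from the log.
func deployExists(kubeContext, name, namespace string) bool {
	err := exec.Command("kubectl", "--context", kubeContext, "describe",
		"deploy/"+name, "-n", namespace).Run()
	return err == nil
}

func main() {
	ok := deployExists("old-k8s-version-455243", "dashboard-metrics-scraper", "kubernetes-dashboard")
	fmt.Println("dashboard-metrics-scraper present:", ok)
}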

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-455243 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
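
VerifyKubernetesImages asks the profile for its loaded images as JSON and flags anything outside the expected Kubernetes image set; the busybox image left over from DeployApp is the known stray here. A loose sketch of consuming that output; the JSON schema is assumed, so it decodes into untyped maps rather than a named struct:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listImages runs "minikube image list --format=json" and decodes the output
// loosely. The exact schema is an assumption of this sketch.
func listImages(binary, profile string) ([]map[string]interface{}, error) {
	out, err := exec.Command(binary, "-p", profile, "image", "list", "--format=json").Output()
	if err != nil {
		return nil, err
	}
	var imgs []map[string]interface{}
	if err := json.Unmarshal(out, &imgs); err != nil {
		return nil, err
	}
	return imgs, nil
}

func main() {
	imgs, err := listImages("out/minikube-linux-amd64", "old-k8s-version-455243")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d images\n", len(imgs))
}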

TestStartStop/group/old-k8s-version/serial/Pause (2.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-455243 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-455243 -n old-k8s-version-455243
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-455243 -n old-k8s-version-455243: exit status 2 (287.712664ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-455243 -n old-k8s-version-455243
E0919 19:33:37.547292   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kindnet-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-455243 -n old-k8s-version-455243: exit status 2 (280.638017ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-455243 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-455243 -n old-k8s-version-455243
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-455243 -n old-k8s-version-455243
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.33s)
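
The Pause sequence is: pause the profile, confirm via status that {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (each status call exits 2, which the test tolerates), then unpause and query both again. A compact sketch of that exit-code dance, with an invented exitCode helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status (0 on success).
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	bin, p := "out/minikube-linux-amd64", "old-k8s-version-455243"
	// While paused, "status" exits 2 and reports APIServer=Paused,
	// Kubelet=Stopped, exactly as the log above records.
	exitCode(bin, "pause", "-p", p)
	fmt.Println("apiserver status exit:", exitCode(bin, "status", "--format={{.APIServer}}", "-p", p, "-n", p))
	fmt.Println("kubelet status exit:", exitCode(bin, "status", "--format={{.Kubelet}}", "-p", p, "-n", p))
	exitCode(bin, "unpause", "-p", p)
}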

TestStartStop/group/newest-cni/serial/FirstStart (28.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-306313 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:33:45.254348   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:01.434034   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/enable-default-cni-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:03.756619   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-306313 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (28.008402185s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-306313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (10.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-306313 --alsologtostderr -v=3
E0919 19:34:15.433034   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.439368   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.450685   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.471997   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.513358   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.594763   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:15.756501   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:16.078312   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:16.720278   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.002062   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.703023   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.709394   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.720769   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.742146   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.783504   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:18.864958   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:19.026504   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:19.348265   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:19.990386   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:20.563484   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-306313 --alsologtostderr -v=3: (10.728646726s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.73s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306313 -n newest-cni-306313
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306313 -n newest-cni-306313: exit status 7 (127.619073ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-306313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (14.12s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-306313 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0919 19:34:21.272451   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:23.041603   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/calico-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:23.834547   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:25.685460   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:26.215792   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:34:28.956346   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/kubenet-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-306313 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (13.689170506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-306313 -n newest-cni-306313
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-306313 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-306313 --alsologtostderr -v=1
E0919 19:34:35.926948   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306313 -n newest-cni-306313
E0919 19:34:36.184498   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/custom-flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306313 -n newest-cni-306313: exit status 2 (284.578872ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306313 -n newest-cni-306313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306313 -n newest-cni-306313: exit status 2 (293.382567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-306313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-306313 -n newest-cni-306313
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-306313 -n newest-cni-306313
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mfxsm" [f639fb22-24a6-4495-bcb7-8c325db7c785] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00369595s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mfxsm" [f639fb22-24a6-4495-bcb7-8c325db7c785] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004176787s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-827971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-827971 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-827971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-827971 -n no-preload-827971
E0919 19:34:56.408793   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-827971 -n no-preload-827971: exit status 2 (274.610788ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-827971 -n no-preload-827971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-827971 -n no-preload-827971: exit status 2 (280.040558ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-827971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-827971 -n no-preload-827971
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-827971 -n no-preload-827971
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.24s)
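Note: the Pause subtest drives the full pause/unpause cycle recorded above. minikube status reports per-component state through a Go template and, as the log shows, returns exit status 2 while a component is Paused or Stopped, which the harness explicitly accepts ("may be ok"). The same sequence by hand, with the expected outputs taken from the log above:
	out/minikube-linux-amd64 pause -p no-preload-827971 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-827971   # prints "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-827971     # prints "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p no-preload-827971 --alsologtostderr -v=1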

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8452m" [c8810128-9c2f-4e54-a26c-f89b29b1ea35] Running
E0919 19:35:25.678005   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/false-717995/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:35:29.936316   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/functional-847211/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003252579s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8452m" [c8810128-9c2f-4e54-a26c-f89b29b1ea35] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004150338s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-062865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-062865 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-062865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865: exit status 2 (281.429023ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865: exit status 2 (280.311391ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-062865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
E0919 19:35:37.370693   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/bridge-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062865 -n default-k8s-diff-port-062865
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-chxcx" [c85ca978-11f3-4f36-b933-0207e820bcd4] Running
E0919 19:35:39.172648   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/auto-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003007722s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-chxcx" [c85ca978-11f3-4f36-b933-0207e820bcd4] Running
E0919 19:35:45.886464   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/default-k8s-diff-port-062865/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:35:48.137090   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/flannel-717995/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003282304s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-669664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-669664 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-669664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669664 -n embed-certs-669664
E0919 19:35:51.008769   14476 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7708/.minikube/profiles/default-k8s-diff-port-062865/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669664 -n embed-certs-669664: exit status 2 (275.086292ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669664 -n embed-certs-669664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669664 -n embed-certs-669664: exit status 2 (270.247152ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-669664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669664 -n embed-certs-669664
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669664 -n embed-certs-669664
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.20s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
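Note: this skip is environmental; the suite here runs the Docker runtime on the Docker driver. A sketch of the cluster configuration the test expects, using real minikube flags (the profile name is hypothetical):
	out/minikube-linux-amd64 start -p containerd-env --driver=docker --container-runtime=containerd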

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-717995 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-717995

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-717995

>>> host: /etc/nsswitch.conf:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/hosts:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/resolv.conf:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-717995

>>> host: crictl pods:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: crictl containers:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> k8s: describe netcat deployment:
error: context "cilium-717995" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-717995" does not exist

>>> k8s: netcat logs:
error: context "cilium-717995" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-717995" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-717995" does not exist

>>> k8s: coredns logs:
error: context "cilium-717995" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-717995" does not exist

>>> k8s: api server logs:
error: context "cilium-717995" does not exist

>>> host: /etc/cni:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: ip a s:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: ip r s:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: iptables-save:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: iptables table nat:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-717995

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-717995

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-717995" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-717995" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-717995

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-717995

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-717995" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-717995" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-717995" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-717995" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-717995" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: kubelet daemon config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> k8s: kubelet logs:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-717995

>>> host: docker daemon status:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: docker daemon config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: docker system info:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: cri-docker daemon status:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: cri-docker daemon config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: cri-dockerd version:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: containerd daemon status:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: containerd daemon config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: containerd config dump:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: crio daemon status:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: crio daemon config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: /etc/crio:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

>>> host: crio config:
* Profile "cilium-717995" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-717995"

----------------------- debugLogs end: cilium-717995 [took: 3.613566252s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-717995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-717995
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)
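Note: every probe in the debugLogs dump above fails for the same underlying reason: the cilium test is skipped before minikube start ever runs, so neither a cilium-717995 kubeconfig context nor a minikube profile exists, as the empty kubectl config captured mid-dump (clusters: null, contexts: null) confirms. A sketch of verifying that state by hand:
	kubectl config get-contexts
	out/minikube-linux-amd64 profile list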

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-823044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-823044
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)