Test Report: Docker_Linux 19648

5a5b9bbbb8805a9ff40b088174fcc86278d72994:2024-09-15:36226

Failed tests (1/343)

|-------|------------------------------|----------|
| Order |         Failed test          | Duration |
|-------|------------------------------|----------|
|    33 | TestAddons/parallel/Registry |   74.17s |
|-------|------------------------------|----------|
TestAddons/parallel/Registry (74.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.021461ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-85p89" [7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002422376s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lrwnn" [727ad348-b4a0-40a9-a423-cac288b38182] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003645936s
addons_test.go:342: (dbg) Run:  kubectl --context addons-924081 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-924081 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-924081 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07668605s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-924081 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 ip
2024/09/15 18:10:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable registry --alsologtostderr -v=1
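
To triage this by hand, the in-cluster probe the test runs can be replayed against a live profile. A minimal sketch, assuming the addons-924081 profile from this run is still up and the registry addon has not yet been disabled:

	# Replay the registry probe exactly as addons_test.go:347 runs it.
	kubectl --context addons-924081 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# A healthy registry answers "HTTP/1.1 200"; in this run the pod instead
	# hung until kubectl's one-minute condition timeout and exited with status 1.
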
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-924081
helpers_test.go:235: (dbg) docker inspect addons-924081:

-- stdout --
	[
	    {
	        "Id": "a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10",
	        "Created": "2024-09-15T17:57:25.705521635Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T17:57:25.832384703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10/hostname",
	        "HostsPath": "/var/lib/docker/containers/a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10/hosts",
	        "LogPath": "/var/lib/docker/containers/a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10/a7cb3ed838c674e83e689853d4795f1939ee1245471508e2b1ba6731e1e9be10-json.log",
	        "Name": "/addons-924081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-924081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-924081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2a0e525c4e6f452f92853fadd7761a978cce1938f86d89fd0a33cbc212950bc-init/diff:/var/lib/docker/overlay2/98b43be93661840522f6675504552b2073bca744c9d1abb04e8ebf1b5d0c4763/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2a0e525c4e6f452f92853fadd7761a978cce1938f86d89fd0a33cbc212950bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2a0e525c4e6f452f92853fadd7761a978cce1938f86d89fd0a33cbc212950bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2a0e525c4e6f452f92853fadd7761a978cce1938f86d89fd0a33cbc212950bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-924081",
	                "Source": "/var/lib/docker/volumes/addons-924081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-924081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-924081",
	                "name.minikube.sigs.k8s.io": "addons-924081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "542f46370b0447adce501b41bd00ee0b59fcd50cbeaa695b53af39b56bbe6f8a",
	            "SandboxKey": "/var/run/docker/netns/542f46370b04",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-924081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e9a913bd60604fddd0465b2a2773a4dbe96b68b8889e384fffda11e3908b4879",
	                    "EndpointID": "866004ebbc4d5475d2448e02162ddb9c7bf43ac752c9e65d8488858df1555be4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-924081",
	                        "a7cb3ed838c6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
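
The fields that matter here are buried in that dump; they can be pulled out individually with the same Go-template queries the harness itself issues later in these logs (a sketch, reusing the container name from this run):

	# Host port that the registry's 5000/tcp is published on (32770 above).
	docker container inspect addons-924081 --format '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
	# Container IP on the addons-924081 network (192.168.49.2 above).
	docker container inspect addons-924081 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
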
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-924081 -n addons-924081
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 logs -n 25: (1.326281315s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-579642 | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC |                     |
	|         | download-docker-579642                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-579642                                                                   | download-docker-579642 | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC | 15 Sep 24 17:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-308651   | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC |                     |
	|         | binary-mirror-308651                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33109                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-308651                                                                     | binary-mirror-308651   | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC | 15 Sep 24 17:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC |                     |
	|         | addons-924081                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC |                     |
	|         | addons-924081                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-924081 --wait=true                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 17:57 UTC | 15 Sep 24 18:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:01 UTC | 15 Sep 24 18:01 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-924081 ssh cat                                                                       | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | /opt/local-path-provisioner/pvc-c592a443-dc12-4138-ba5c-46e5f18ad12e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | -p addons-924081                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | addons-924081                                                                               |                        |         |         |                     |                     |
	| addons  | addons-924081 addons                                                                        | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-924081 addons                                                                        | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:09 UTC | 15 Sep 24 18:09 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-924081 addons                                                                        | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | -p addons-924081                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | addons-924081                                                                               |                        |         |         |                     |                     |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-924081 ssh curl -s                                                                   | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-924081 ip                                                                            | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-924081 ip                                                                            | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC | 15 Sep 24 18:10 UTC |
	| addons  | addons-924081 addons disable                                                                | addons-924081          | jenkins | v1.34.0 | 15 Sep 24 18:10 UTC |                     |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 17:57:04
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 17:57:04.308539   19304 out.go:345] Setting OutFile to fd 1 ...
	I0915 17:57:04.308786   19304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:57:04.308795   19304 out.go:358] Setting ErrFile to fd 2...
	I0915 17:57:04.308799   19304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:57:04.308959   19304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 17:57:04.309553   19304 out.go:352] Setting JSON to false
	I0915 17:57:04.310358   19304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2370,"bootTime":1726420654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 17:57:04.310443   19304 start.go:139] virtualization: kvm guest
	I0915 17:57:04.312764   19304 out.go:177] * [addons-924081] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 17:57:04.314257   19304 notify.go:220] Checking for updates...
	I0915 17:57:04.314269   19304 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 17:57:04.315814   19304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 17:57:04.317230   19304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 17:57:04.318630   19304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	I0915 17:57:04.320105   19304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 17:57:04.321479   19304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 17:57:04.323273   19304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 17:57:04.346414   19304 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 17:57:04.346483   19304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:57:04.399221   19304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 17:57:04.390376351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:57:04.399318   19304 docker.go:318] overlay module found
	I0915 17:57:04.401441   19304 out.go:177] * Using the docker driver based on user configuration
	I0915 17:57:04.402908   19304 start.go:297] selected driver: docker
	I0915 17:57:04.402926   19304 start.go:901] validating driver "docker" against <nil>
	I0915 17:57:04.402941   19304 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 17:57:04.403669   19304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:57:04.450366   19304 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 17:57:04.441789997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:57:04.450534   19304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 17:57:04.450805   19304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 17:57:04.452918   19304 out.go:177] * Using Docker driver with root privileges
	I0915 17:57:04.454521   19304 cni.go:84] Creating CNI manager for ""
	I0915 17:57:04.454582   19304 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 17:57:04.454593   19304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 17:57:04.454660   19304 start.go:340] cluster config:
	{Name:addons-924081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-924081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 17:57:04.456301   19304 out.go:177] * Starting "addons-924081" primary control-plane node in "addons-924081" cluster
	I0915 17:57:04.457843   19304 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 17:57:04.459411   19304 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 17:57:04.460916   19304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 17:57:04.460962   19304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 17:57:04.460971   19304 cache.go:56] Caching tarball of preloaded images
	I0915 17:57:04.461026   19304 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 17:57:04.461051   19304 preload.go:172] Found /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 17:57:04.461060   19304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 17:57:04.461365   19304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/config.json ...
	I0915 17:57:04.461388   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/config.json: {Name:mk891e370c5a5e03da92ab4a1a7be6c831238c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:04.478285   19304 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 17:57:04.478422   19304 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 17:57:04.478442   19304 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 17:57:04.478452   19304 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 17:57:04.478461   19304 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 17:57:04.478466   19304 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 17:57:16.550329   19304 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 17:57:16.550362   19304 cache.go:194] Successfully downloaded all kic artifacts
	I0915 17:57:16.550401   19304 start.go:360] acquireMachinesLock for addons-924081: {Name:mk882a2e9c8c2eb74b376399b00c4f80ae1e143d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 17:57:16.550496   19304 start.go:364] duration metric: took 76.495µs to acquireMachinesLock for "addons-924081"
	I0915 17:57:16.550532   19304 start.go:93] Provisioning new machine with config: &{Name:addons-924081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-924081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 17:57:16.550608   19304 start.go:125] createHost starting for "" (driver="docker")
	I0915 17:57:16.553398   19304 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 17:57:16.553616   19304 start.go:159] libmachine.API.Create for "addons-924081" (driver="docker")
	I0915 17:57:16.553648   19304 client.go:168] LocalClient.Create starting
	I0915 17:57:16.553772   19304 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem
	I0915 17:57:16.715619   19304 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/cert.pem
	I0915 17:57:16.804781   19304 cli_runner.go:164] Run: docker network inspect addons-924081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 17:57:16.820095   19304 cli_runner.go:211] docker network inspect addons-924081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 17:57:16.820177   19304 network_create.go:284] running [docker network inspect addons-924081] to gather additional debugging logs...
	I0915 17:57:16.820200   19304 cli_runner.go:164] Run: docker network inspect addons-924081
	W0915 17:57:16.835536   19304 cli_runner.go:211] docker network inspect addons-924081 returned with exit code 1
	I0915 17:57:16.835564   19304 network_create.go:287] error running [docker network inspect addons-924081]: docker network inspect addons-924081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-924081 not found
	I0915 17:57:16.835590   19304 network_create.go:289] output of [docker network inspect addons-924081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-924081 not found
	
	** /stderr **
	I0915 17:57:16.835720   19304 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 17:57:16.851386   19304 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ad7520}
	I0915 17:57:16.851428   19304 network_create.go:124] attempt to create docker network addons-924081 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 17:57:16.851476   19304 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-924081 addons-924081
	I0915 17:57:16.910612   19304 network_create.go:108] docker network addons-924081 192.168.49.0/24 created
	I0915 17:57:16.910644   19304 kic.go:121] calculated static IP "192.168.49.2" for the "addons-924081" container
	I0915 17:57:16.910708   19304 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 17:57:16.925889   19304 cli_runner.go:164] Run: docker volume create addons-924081 --label name.minikube.sigs.k8s.io=addons-924081 --label created_by.minikube.sigs.k8s.io=true
	I0915 17:57:16.942454   19304 oci.go:103] Successfully created a docker volume addons-924081
	I0915 17:57:16.942529   19304 cli_runner.go:164] Run: docker run --rm --name addons-924081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-924081 --entrypoint /usr/bin/test -v addons-924081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 17:57:21.740470   19304 cli_runner.go:217] Completed: docker run --rm --name addons-924081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-924081 --entrypoint /usr/bin/test -v addons-924081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.797906778s)
	I0915 17:57:21.740493   19304 oci.go:107] Successfully prepared a docker volume addons-924081
	I0915 17:57:21.740503   19304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 17:57:21.740522   19304 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 17:57:21.740580   19304 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-924081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 17:57:25.642519   19304 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-924081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.901891731s)
	I0915 17:57:25.642551   19304 kic.go:203] duration metric: took 3.90202531s to extract preloaded images to volume ...
	W0915 17:57:25.642712   19304 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 17:57:25.642848   19304 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 17:57:25.691199   19304 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-924081 --name addons-924081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-924081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-924081 --network addons-924081 --ip 192.168.49.2 --volume addons-924081:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 17:57:25.989382   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Running}}
	I0915 17:57:26.006661   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:26.023949   19304 cli_runner.go:164] Run: docker exec addons-924081 stat /var/lib/dpkg/alternatives/iptables
	I0915 17:57:26.067786   19304 oci.go:144] the created container "addons-924081" has a running status.
	I0915 17:57:26.067821   19304 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa...
	I0915 17:57:26.216380   19304 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 17:57:26.238981   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:26.256666   19304 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 17:57:26.256688   19304 kic_runner.go:114] Args: [docker exec --privileged addons-924081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 17:57:26.301516   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:26.318185   19304 machine.go:93] provisionDockerMachine start ...
	I0915 17:57:26.318262   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:26.346937   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:26.347176   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:26.347192   19304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 17:57:26.347937   19304 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60170->127.0.0.1:32768: read: connection reset by peer
	I0915 17:57:29.478002   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-924081
	
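The failed dial followed by a clean result three seconds later is the expected pattern while sshd inside the fresh container is still starting: the provisioner simply redials until the handshake succeeds. A sketch of that retry loop with golang.org/x/crypto/ssh; the address and key path follow the log, the timings are illustrative:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps dialing the forwarded SSH port until sshd inside
	// the container accepts the handshake or the deadline expires.
	func dialWithRetry(addr, keyPath string, deadline time.Duration) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // locally forwarded port
			Timeout:         5 * time.Second,
		}
		stop := time.Now().Add(deadline)
		for {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			if time.Now().After(stop) {
				return nil, fmt.Errorf("ssh not ready: %v", err)
			}
			// e.g. "connection reset by peer" while sshd is still booting
			time.Sleep(time.Second)
		}
	}

	func main() {
		client, err := dialWithRetry("127.0.0.1:32768", "id_rsa", time.Minute)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}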
	I0915 17:57:29.478033   19304 ubuntu.go:169] provisioning hostname "addons-924081"
	I0915 17:57:29.478094   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:29.494386   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:29.494551   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:29.494564   19304 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-924081 && echo "addons-924081" | sudo tee /etc/hostname
	I0915 17:57:29.637518   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-924081
	
	I0915 17:57:29.637591   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:29.655297   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:29.655457   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:29.655473   19304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-924081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-924081/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-924081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 17:57:29.786575   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
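Each provisioning command (the hostname write above, this /etc/hosts fixup) is one shell string executed on a fresh SSH session. A small helper that would sit in the same package as the dial sketch above, reusing its golang.org/x/crypto/ssh import:

	// runRemote executes one shell command on a fresh SSH session and
	// returns its combined output; each provisioning step is one such call.
	func runRemote(client *ssh.Client, cmd string) (string, error) {
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}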
	I0915 17:57:29.786604   19304 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-11129/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-11129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-11129/.minikube}
	I0915 17:57:29.786639   19304 ubuntu.go:177] setting up certificates
	I0915 17:57:29.786655   19304 provision.go:84] configureAuth start
	I0915 17:57:29.786713   19304 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-924081
	I0915 17:57:29.802003   19304 provision.go:143] copyHostCerts
	I0915 17:57:29.802087   19304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-11129/.minikube/ca.pem (1082 bytes)
	I0915 17:57:29.802207   19304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-11129/.minikube/cert.pem (1123 bytes)
	I0915 17:57:29.802288   19304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-11129/.minikube/key.pem (1679 bytes)
	I0915 17:57:29.802368   19304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-11129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca-key.pem org=jenkins.addons-924081 san=[127.0.0.1 192.168.49.2 addons-924081 localhost minikube]
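configureAuth generates a server certificate whose SANs are exactly the list logged above. A compressed crypto/x509 sketch of that step; it signs with a throwaway CA so the example is self-contained, whereas the real code loads the persisted ca.pem/ca-key.pem, and most error handling is elided:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for .minikube/certs/ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN set the log reports:
		// [127.0.0.1 192.168.49.2 addons-924081 localhost minikube]
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-924081"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-924081", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	}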
	I0915 17:57:29.955135   19304 provision.go:177] copyRemoteCerts
	I0915 17:57:29.955208   19304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 17:57:29.955252   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:29.971995   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:30.066949   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 17:57:30.088131   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 17:57:30.109332   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 17:57:30.129931   19304 provision.go:87] duration metric: took 343.25986ms to configureAuth
	I0915 17:57:30.129960   19304 ubuntu.go:193] setting minikube options for container-runtime
	I0915 17:57:30.130102   19304 config.go:182] Loaded profile config "addons-924081": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 17:57:30.130145   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:30.146651   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:30.146839   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:30.146851   19304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 17:57:30.278969   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 17:57:30.278991   19304 ubuntu.go:71] root file system type: overlay
	I0915 17:57:30.279118   19304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 17:57:30.279179   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:30.295356   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:30.295543   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:30.295639   19304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 17:57:30.436901   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 17:57:30.436973   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:30.453652   19304 main.go:141] libmachine: Using SSH client type: native
	I0915 17:57:30.453827   19304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 17:57:30.453843   19304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 17:57:31.135153   19304 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-15 17:57:30.431076713 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
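The `sudo diff -u ... || { mv ...; systemctl ...; }` command above is what makes the unit update idempotent: when the rendered file matches the installed one, nothing is moved and docker is not restarted. The same guard expressed as a Go sketch, with paths from the log and error handling trimmed:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installUnit swaps in a freshly rendered docker.service and restarts
	// the daemon only when it differs from the installed unit.
	func installUnit(rendered []byte) error {
		const unit = "/lib/systemd/system/docker.service"
		current, _ := os.ReadFile(unit) // a missing unit reads as empty
		if bytes.Equal(current, rendered) {
			return nil // identical: skip daemon-reload and restart
		}
		if err := os.WriteFile(unit, rendered, 0644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"-f", "enable", "docker"},
			{"-f", "restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		rendered, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if err := installUnit(rendered); err != nil {
			panic(err)
		}
	}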
	I0915 17:57:31.135185   19304 machine.go:96] duration metric: took 4.8169796s to provisionDockerMachine
	I0915 17:57:31.135200   19304 client.go:171] duration metric: took 14.581542314s to LocalClient.Create
	I0915 17:57:31.135220   19304 start.go:167] duration metric: took 14.581604833s to libmachine.API.Create "addons-924081"
	I0915 17:57:31.135231   19304 start.go:293] postStartSetup for "addons-924081" (driver="docker")
	I0915 17:57:31.135240   19304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 17:57:31.135292   19304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 17:57:31.135333   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:31.151614   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:31.247237   19304 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 17:57:31.250046   19304 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 17:57:31.250084   19304 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 17:57:31.250094   19304 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 17:57:31.250103   19304 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 17:57:31.250113   19304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-11129/.minikube/addons for local assets ...
	I0915 17:57:31.250168   19304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-11129/.minikube/files for local assets ...
	I0915 17:57:31.250190   19304 start.go:296] duration metric: took 114.954264ms for postStartSetup
	I0915 17:57:31.250461   19304 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-924081
	I0915 17:57:31.266421   19304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/config.json ...
	I0915 17:57:31.266878   19304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 17:57:31.266934   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:31.283483   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:31.375214   19304 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 17:57:31.379065   19304 start.go:128] duration metric: took 14.828441455s to createHost
	I0915 17:57:31.379091   19304 start.go:83] releasing machines lock for "addons-924081", held for 14.828581673s
	I0915 17:57:31.379155   19304 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-924081
	I0915 17:57:31.395144   19304 ssh_runner.go:195] Run: cat /version.json
	I0915 17:57:31.395198   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:31.395261   19304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 17:57:31.395312   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:31.412033   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:31.413936   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:31.572370   19304 ssh_runner.go:195] Run: systemctl --version
	I0915 17:57:31.576279   19304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 17:57:31.580034   19304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0915 17:57:31.601338   19304 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0915 17:57:31.601400   19304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 17:57:31.625793   19304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
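Disabling the conflicting bridge/podman CNI configs is a rename, not a delete, so the files can be restored later. A Go sketch of that step; the glob patterns follow the find expression above:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Rename bridge/podman CNI configs out of the way, as the
		// find ... -exec mv {} {}.mk_disabled step does above.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled
				}
				fmt.Println("disabling", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println(err)
				}
			}
		}
	}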
	I0915 17:57:31.625820   19304 start.go:495] detecting cgroup driver to use...
	I0915 17:57:31.625860   19304 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 17:57:31.625955   19304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 17:57:31.640220   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 17:57:31.648929   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 17:57:31.657623   19304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 17:57:31.657677   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 17:57:31.666503   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 17:57:31.675548   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 17:57:31.684354   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 17:57:31.692956   19304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 17:57:31.701201   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 17:57:31.710396   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 17:57:31.719646   19304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 17:57:31.728544   19304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 17:57:31.735870   19304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 17:57:31.743748   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:31.819048   19304 ssh_runner.go:195] Run: sudo systemctl restart containerd
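The run of sed commands above is a series of anchored line rewrites on /etc/containerd/config.toml. One of them, forcing SystemdCgroup = false to match the detected cgroupfs driver, as a Go sketch:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0644); err != nil {
			panic(err)
		}
	}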
	I0915 17:57:31.911436   19304 start.go:495] detecting cgroup driver to use...
	I0915 17:57:31.911490   19304 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 17:57:31.911537   19304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 17:57:31.923378   19304 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0915 17:57:31.923468   19304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 17:57:31.934300   19304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 17:57:31.949595   19304 ssh_runner.go:195] Run: which cri-dockerd
	I0915 17:57:31.952678   19304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 17:57:31.960895   19304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 17:57:31.977120   19304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 17:57:32.060316   19304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 17:57:32.159196   19304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 17:57:32.159313   19304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 17:57:32.174909   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:32.252056   19304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 17:57:32.491557   19304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 17:57:32.501563   19304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 17:57:32.511189   19304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 17:57:32.594163   19304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 17:57:32.674189   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:32.746745   19304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 17:57:32.758466   19304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 17:57:32.767962   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:32.840060   19304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 17:57:32.899978   19304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 17:57:32.900072   19304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 17:57:32.903980   19304 start.go:563] Will wait 60s for crictl version
	I0915 17:57:32.904038   19304 ssh_runner.go:195] Run: which crictl
	I0915 17:57:32.907033   19304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 17:57:32.938796   19304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
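Both 60s waits above are simple polls: stat the socket (or run crictl) until it succeeds or the deadline passes. A sketch of the socket wait; the poll interval is illustrative:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket path exists, as the "Will wait
	// 60s for socket path /var/run/cri-dockerd.sock" step does.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", time.Minute); err != nil {
			panic(err)
		}
	}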
	I0915 17:57:32.938873   19304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 17:57:32.961234   19304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 17:57:32.986377   19304 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 17:57:32.986459   19304 cli_runner.go:164] Run: docker network inspect addons-924081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 17:57:33.002562   19304 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 17:57:33.005908   19304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
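The bash one-liner above updates /etc/hosts idempotently: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the result back. The same logic as a Go sketch:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		// Replicate the grep -v + append trick: drop any stale
		// host.minikube.internal mapping, then append the fresh one.
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.49.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}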
	I0915 17:57:33.015547   19304 kubeadm.go:883] updating cluster {Name:addons-924081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-924081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 17:57:33.015680   19304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 17:57:33.015736   19304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 17:57:33.033509   19304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 17:57:33.033531   19304 docker.go:615] Images already preloaded, skipping extraction
	I0915 17:57:33.033602   19304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 17:57:33.050641   19304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 17:57:33.050663   19304 cache_images.go:84] Images are preloaded, skipping loading
	I0915 17:57:33.050680   19304 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0915 17:57:33.050813   19304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-924081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-924081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 17:57:33.050876   19304 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 17:57:33.094941   19304 cni.go:84] Creating CNI manager for ""
	I0915 17:57:33.094970   19304 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 17:57:33.094982   19304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 17:57:33.095000   19304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-924081 NodeName:addons-924081 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 17:57:33.095115   19304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-924081"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
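minikube renders this kubeadm config from a Go template before copying it to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). A toy text/template rendering of just the InitConfiguration head, to show the shape of that step; the parameter struct and its field names are illustrative, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		params := struct {
			AdvertiseAddress, CRISocket, NodeName string
			APIServerPort                         int
		}{"192.168.49.2", "unix:///var/run/cri-dockerd.sock", "addons-924081", 8443}
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}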
	I0915 17:57:33.095167   19304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 17:57:33.103187   19304 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 17:57:33.103248   19304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 17:57:33.111010   19304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0915 17:57:33.126679   19304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 17:57:33.142034   19304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0915 17:57:33.157513   19304 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 17:57:33.160572   19304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 17:57:33.169962   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:33.248922   19304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 17:57:33.261323   19304 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081 for IP: 192.168.49.2
	I0915 17:57:33.261343   19304 certs.go:194] generating shared ca certs ...
	I0915 17:57:33.261358   19304 certs.go:226] acquiring lock for ca certs: {Name:mk64df53462b737f6bb192ffdd1f8219c712c8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:33.261482   19304 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-11129/.minikube/ca.key
	I0915 17:57:33.828799   19304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-11129/.minikube/ca.crt ...
	I0915 17:57:33.828831   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/ca.crt: {Name:mkc86bdd0f64cf161ebb1f9d18f7e12f0930229b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:33.829021   19304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-11129/.minikube/ca.key ...
	I0915 17:57:33.829034   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/ca.key: {Name:mk71b60007278d168819bd274a4d8c0891031495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:33.829135   19304 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.key
	I0915 17:57:33.968475   19304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.crt ...
	I0915 17:57:33.968506   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.crt: {Name:mk9add123944b044155f593aaca73a09db7f009c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:33.968694   19304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.key ...
	I0915 17:57:33.968708   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.key: {Name:mk3cd78008207692728a87ff13ea020c575f799f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:33.968806   19304 certs.go:256] generating profile certs ...
	I0915 17:57:33.968877   19304 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.key
	I0915 17:57:33.968895   19304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt with IP's: []
	I0915 17:57:34.138021   19304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt ...
	I0915 17:57:34.138053   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: {Name:mkcf01dab9c3eab2bd9d85b6c5d4e5fd7f6abea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.138240   19304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.key ...
	I0915 17:57:34.138254   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.key: {Name:mkbce5b274e547b56ef0a170132afdc391aaa5e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.138349   19304 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key.67f312cf
	I0915 17:57:34.138373   19304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt.67f312cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 17:57:34.246594   19304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt.67f312cf ...
	I0915 17:57:34.246628   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt.67f312cf: {Name:mk3533e5a3b63037de3f1d7aa52276da7468cc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.246829   19304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key.67f312cf ...
	I0915 17:57:34.246846   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key.67f312cf: {Name:mk89d37c30c70ebc8f72c68d6cd8282f51cccd30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.246959   19304 certs.go:381] copying /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt.67f312cf -> /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt
	I0915 17:57:34.247072   19304 certs.go:385] copying /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key.67f312cf -> /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key
	I0915 17:57:34.247146   19304 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.key
	I0915 17:57:34.247175   19304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.crt with IP's: []
	I0915 17:57:34.376761   19304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.crt ...
	I0915 17:57:34.376801   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.crt: {Name:mke19914bd1151b8369b1920cd186fa0444a0ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.376979   19304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.key ...
	I0915 17:57:34.376993   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.key: {Name:mk653aea631ad864bf723125c711eed4ec5692fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:34.377470   19304 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 17:57:34.377526   19304 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/ca.pem (1082 bytes)
	I0915 17:57:34.377563   19304 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/cert.pem (1123 bytes)
	I0915 17:57:34.377600   19304 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-11129/.minikube/certs/key.pem (1679 bytes)
	I0915 17:57:34.378667   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 17:57:34.400410   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 17:57:34.420558   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 17:57:34.440643   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 17:57:34.460549   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 17:57:34.480824   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 17:57:34.501075   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 17:57:34.521324   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 17:57:34.541376   19304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-11129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 17:57:34.561745   19304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 17:57:34.577223   19304 ssh_runner.go:195] Run: openssl version
	I0915 17:57:34.582087   19304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 17:57:34.590288   19304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 17:57:34.593394   19304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0915 17:57:34.593437   19304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 17:57:34.599626   19304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 17:57:34.607833   19304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 17:57:34.610719   19304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 17:57:34.610775   19304 kubeadm.go:392] StartCluster: {Name:addons-924081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-924081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 17:57:34.610893   19304 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 17:57:34.627376   19304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 17:57:34.635133   19304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 17:57:34.642959   19304 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 17:57:34.643007   19304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 17:57:34.650352   19304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 17:57:34.650415   19304 kubeadm.go:157] found existing configuration files:
	
	I0915 17:57:34.650458   19304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 17:57:34.657662   19304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 17:57:34.657718   19304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 17:57:34.664809   19304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 17:57:34.672029   19304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 17:57:34.672080   19304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 17:57:34.679113   19304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 17:57:34.686221   19304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 17:57:34.686267   19304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 17:57:34.693329   19304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 17:57:34.700813   19304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 17:57:34.700862   19304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
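The four grep-then-rm rounds above implement one rule: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. The same loop as a Go sketch (run as root; the file list and endpoint come from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // already targets the right endpoint: keep it
			}
			// Missing or stale: remove so kubeadm writes a fresh one.
			if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
				fmt.Println(err)
			}
		}
	}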
	I0915 17:57:34.708301   19304 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 17:57:34.743204   19304 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 17:57:34.743257   19304 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 17:57:34.763494   19304 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 17:57:34.763590   19304 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0915 17:57:34.763635   19304 kubeadm.go:310] OS: Linux
	I0915 17:57:34.763713   19304 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 17:57:34.763804   19304 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 17:57:34.763891   19304 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 17:57:34.763966   19304 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 17:57:34.764032   19304 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 17:57:34.764108   19304 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 17:57:34.764177   19304 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 17:57:34.764246   19304 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 17:57:34.764315   19304 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 17:57:34.812995   19304 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 17:57:34.813129   19304 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 17:57:34.813241   19304 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 17:57:34.822555   19304 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 17:57:34.825581   19304 out.go:235]   - Generating certificates and keys ...
	I0915 17:57:34.825677   19304 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 17:57:34.825750   19304 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 17:57:35.001064   19304 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 17:57:35.497811   19304 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 17:57:35.693447   19304 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 17:57:35.922868   19304 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 17:57:36.090153   19304 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 17:57:36.090269   19304 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-924081 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 17:57:36.249285   19304 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 17:57:36.249430   19304 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-924081 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 17:57:36.427098   19304 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 17:57:36.489847   19304 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 17:57:36.687487   19304 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 17:57:36.687597   19304 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 17:57:36.874296   19304 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 17:57:36.999566   19304 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 17:57:37.309176   19304 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 17:57:37.386534   19304 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 17:57:37.685456   19304 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 17:57:37.685861   19304 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 17:57:37.688257   19304 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 17:57:37.690365   19304 out.go:235]   - Booting up control plane ...
	I0915 17:57:37.690454   19304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 17:57:37.690523   19304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 17:57:37.691054   19304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 17:57:37.699869   19304 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 17:57:37.704977   19304 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 17:57:37.705036   19304 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 17:57:37.790653   19304 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 17:57:37.790781   19304 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 17:57:38.293028   19304 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.7377ms
	I0915 17:57:38.293163   19304 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 17:57:42.794325   19304 kubeadm.go:310] [api-check] The API server is healthy after 4.501982833s
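Both health checks above can be reproduced by hand from inside the minikube node; a small sketch (port 10248 is the kubelet healthz port named in the log, 8443 the apiserver port used throughout it):

	# kubelet health endpoint polled by kubeadm above
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet ok"
	# apiserver health endpoint (self-signed cert, hence -k)
	curl -skf https://192.168.49.2:8443/healthz && echo "apiserver ok"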
	I0915 17:57:42.805490   19304 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 17:57:42.815042   19304 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 17:57:42.831132   19304 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 17:57:42.831399   19304 kubeadm.go:310] [mark-control-plane] Marking the node addons-924081 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 17:57:42.837754   19304 kubeadm.go:310] [bootstrap-token] Using token: frpd3s.5ayuoav6svg3tyrq
	I0915 17:57:42.839041   19304 out.go:235]   - Configuring RBAC rules ...
	I0915 17:57:42.839186   19304 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 17:57:42.843325   19304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 17:57:42.850055   19304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 17:57:42.852413   19304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 17:57:42.854608   19304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 17:57:42.856994   19304 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 17:57:43.199889   19304 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 17:57:43.643324   19304 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 17:57:44.199722   19304 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 17:57:44.200527   19304 kubeadm.go:310] 
	I0915 17:57:44.200610   19304 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 17:57:44.200621   19304 kubeadm.go:310] 
	I0915 17:57:44.200696   19304 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 17:57:44.200704   19304 kubeadm.go:310] 
	I0915 17:57:44.200725   19304 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 17:57:44.200781   19304 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 17:57:44.200825   19304 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 17:57:44.200843   19304 kubeadm.go:310] 
	I0915 17:57:44.200933   19304 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 17:57:44.200953   19304 kubeadm.go:310] 
	I0915 17:57:44.201025   19304 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 17:57:44.201037   19304 kubeadm.go:310] 
	I0915 17:57:44.201112   19304 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 17:57:44.201213   19304 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 17:57:44.201310   19304 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 17:57:44.201320   19304 kubeadm.go:310] 
	I0915 17:57:44.201446   19304 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 17:57:44.201541   19304 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 17:57:44.201550   19304 kubeadm.go:310] 
	I0915 17:57:44.201626   19304 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token frpd3s.5ayuoav6svg3tyrq \
	I0915 17:57:44.201757   19304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8a792a950dea0de4c6def1e7426f15475332fd5b2be459c37ac9a68238375f24 \
	I0915 17:57:44.201804   19304 kubeadm.go:310] 	--control-plane 
	I0915 17:57:44.201814   19304 kubeadm.go:310] 
	I0915 17:57:44.201930   19304 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 17:57:44.201939   19304 kubeadm.go:310] 
	I0915 17:57:44.202042   19304 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token frpd3s.5ayuoav6svg3tyrq \
	I0915 17:57:44.202182   19304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8a792a950dea0de4c6def1e7426f15475332fd5b2be459c37ac9a68238375f24 
	I0915 17:57:44.204208   19304 kubeadm.go:310] W0915 17:57:34.740756    1924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 17:57:44.204460   19304 kubeadm.go:310] W0915 17:57:34.741321    1924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 17:57:44.204707   19304 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0915 17:57:44.204847   19304 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
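The two deprecation warnings above point at their own fix: migrating the kubeadm.k8s.io/v1beta3 config to the newer API version. A sketch of that step, using the command the warnings themselves recommend:

	# Rewrite the deprecated v1beta3 config with the newer API version,
	# then review the differences before adopting it.
	kubeadm config migrate --old-config old.yaml --new-config new.yaml
	diff -u old.yaml new.yaml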
	I0915 17:57:44.204862   19304 cni.go:84] Creating CNI manager for ""
	I0915 17:57:44.204883   19304 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 17:57:44.206671   19304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 17:57:44.207801   19304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 17:57:44.215668   19304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
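The 496-byte conflist scp'd above is not printed in the log; below is an illustrative bridge CNI config of the same general shape, written the same way (a sketch only: the subnet and the exact fields minikube generates may differ):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF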
	I0915 17:57:44.231561   19304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 17:57:44.231656   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:44.231682   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-924081 minikube.k8s.io/updated_at=2024_09_15T17_57_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=addons-924081 minikube.k8s.io/primary=true
	I0915 17:57:44.296293   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:44.334968   19304 ops.go:34] apiserver oom_adj: -16
	I0915 17:57:44.796522   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:45.297253   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:45.796957   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:46.296450   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:46.797158   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:47.297356   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:47.797014   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:48.297329   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:48.797243   19304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 17:57:48.864971   19304 kubeadm.go:1113] duration metric: took 4.633375412s to wait for elevateKubeSystemPrivileges
	I0915 17:57:48.865009   19304 kubeadm.go:394] duration metric: took 14.254237395s to StartCluster
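The burst of identical "kubectl get sa default" runs above (17:57:44 through 17:57:48) is minikube polling until the default ServiceAccount exists, which signals that the service-account controller is up; the cluster-admin binding created at 17:57:44.231682 pairs with it. A standalone sketch of the same wait (KCTL is just a local shorthand introduced here):

	# Poll for the default ServiceAccount, then grant kube-system:default
	# cluster-admin so addons can run (mirrors elevateKubeSystemPrivileges).
	KCTL='sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig'
	until $KCTL get sa default >/dev/null 2>&1; do sleep 0.5; done
	$KCTL create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default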
	I0915 17:57:48.865032   19304 settings.go:142] acquiring lock: {Name:mk8fb406764d83efd0c2a982185f31c6d8eb1dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:48.865143   19304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 17:57:48.865572   19304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/kubeconfig: {Name:mk63f7e1b431103dccd36626008b13a19d1029e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:57:48.865765   19304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 17:57:48.865798   19304 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 17:57:48.865870   19304 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 17:57:48.865981   19304 addons.go:69] Setting yakd=true in profile "addons-924081"
	I0915 17:57:48.865999   19304 addons.go:69] Setting default-storageclass=true in profile "addons-924081"
	I0915 17:57:48.866023   19304 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-924081"
	I0915 17:57:48.866031   19304 addons.go:69] Setting registry=true in profile "addons-924081"
	I0915 17:57:48.866033   19304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-924081"
	I0915 17:57:48.866037   19304 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-924081"
	I0915 17:57:48.866042   19304 addons.go:234] Setting addon registry=true in "addons-924081"
	I0915 17:57:48.866035   19304 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-924081"
	I0915 17:57:48.866049   19304 config.go:182] Loaded profile config "addons-924081": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 17:57:48.866050   19304 addons.go:69] Setting ingress-dns=true in profile "addons-924081"
	I0915 17:57:48.866069   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866077   19304 addons.go:69] Setting inspektor-gadget=true in profile "addons-924081"
	I0915 17:57:48.866079   19304 addons.go:234] Setting addon ingress-dns=true in "addons-924081"
	I0915 17:57:48.866088   19304 addons.go:234] Setting addon inspektor-gadget=true in "addons-924081"
	I0915 17:57:48.866087   19304 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-924081"
	I0915 17:57:48.866106   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866118   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866121   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866130   19304 addons.go:69] Setting storage-provisioner=true in profile "addons-924081"
	I0915 17:57:48.866148   19304 addons.go:234] Setting addon storage-provisioner=true in "addons-924081"
	I0915 17:57:48.866171   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866182   19304 addons.go:69] Setting volcano=true in profile "addons-924081"
	I0915 17:57:48.866194   19304 addons.go:234] Setting addon volcano=true in "addons-924081"
	I0915 17:57:48.866023   19304 addons.go:69] Setting helm-tiller=true in profile "addons-924081"
	I0915 17:57:48.866215   19304 addons.go:234] Setting addon helm-tiller=true in "addons-924081"
	I0915 17:57:48.866217   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866239   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866452   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866596   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866605   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866069   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866665   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866665   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866014   19304 addons.go:69] Setting gcp-auth=true in profile "addons-924081"
	I0915 17:57:48.866678   19304 addons.go:69] Setting ingress=true in profile "addons-924081"
	I0915 17:57:48.866690   19304 addons.go:234] Setting addon ingress=true in "addons-924081"
	I0915 17:57:48.866693   19304 mustload.go:65] Loading cluster: addons-924081
	I0915 17:57:48.866714   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.866903   19304 config.go:182] Loaded profile config "addons-924081": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 17:57:48.867055   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.867153   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.867230   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866007   19304 addons.go:234] Setting addon yakd=true in "addons-924081"
	I0915 17:57:48.867789   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.868305   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866608   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866005   19304 addons.go:69] Setting cloud-spanner=true in profile "addons-924081"
	I0915 17:57:48.870040   19304 addons.go:234] Setting addon cloud-spanner=true in "addons-924081"
	I0915 17:57:48.870076   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.870433   19304 addons.go:69] Setting volumesnapshots=true in profile "addons-924081"
	I0915 17:57:48.870486   19304 addons.go:234] Setting addon volumesnapshots=true in "addons-924081"
	I0915 17:57:48.870543   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.871158   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.866667   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.871972   19304 out.go:177] * Verifying Kubernetes components...
	I0915 17:57:48.866120   19304 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-924081"
	I0915 17:57:48.873738   19304 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-924081"
	I0915 17:57:48.874412   19304 addons.go:69] Setting metrics-server=true in profile "addons-924081"
	I0915 17:57:48.874436   19304 addons.go:234] Setting addon metrics-server=true in "addons-924081"
	I0915 17:57:48.874470   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.875327   19304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 17:57:48.866665   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.899982   19304 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 17:57:48.900626   19304 addons.go:234] Setting addon default-storageclass=true in "addons-924081"
	I0915 17:57:48.900671   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.901164   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.901184   19304 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 17:57:48.901199   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 17:57:48.901247   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
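The Go template in the docker inspect calls above extracts the host port that Docker mapped to the container's SSH port; run standalone it returns the port the sshutil lines below connect to:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-924081
	# -> 32768 (matching Port:32768 in the ssh client lines below)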
	I0915 17:57:48.903802   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.904247   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.904923   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.909944   19304 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 17:57:48.912296   19304 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 17:57:48.913932   19304 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 17:57:48.914801   19304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 17:57:48.915446   19304 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 17:57:48.915471   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 17:57:48.915535   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.915924   19304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 17:57:48.915941   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 17:57:48.915992   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.923012   19304 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 17:57:48.924483   19304 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 17:57:48.925760   19304 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 17:57:48.928039   19304 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 17:57:48.928073   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 17:57:48.928133   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.933168   19304 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 17:57:48.934870   19304 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 17:57:48.934892   19304 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 17:57:48.934957   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.935379   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.942599   19304 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 17:57:48.945877   19304 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 17:57:48.945901   19304 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 17:57:48.945972   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.953101   19304 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 17:57:48.956809   19304 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 17:57:48.956840   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 17:57:48.956900   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.967385   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 17:57:48.967425   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 17:57:48.967387   19304 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 17:57:48.967385   19304 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 17:57:48.969697   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 17:57:48.969720   19304 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 17:57:48.969803   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.970585   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 17:57:48.970702   19304 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 17:57:48.970713   19304 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 17:57:48.970794   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.971771   19304 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 17:57:48.973293   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 17:57:48.973395   19304 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 17:57:48.973414   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 17:57:48.973464   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.975345   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 17:57:48.976431   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 17:57:48.977503   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 17:57:48.979113   19304 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 17:57:48.979130   19304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 17:57:48.979195   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.979293   19304 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-924081"
	I0915 17:57:48.979329   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:48.979613   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 17:57:48.980264   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:48.980958   19304 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 17:57:48.982109   19304 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 17:57:48.982299   19304 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 17:57:48.982313   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 17:57:48.982364   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.983195   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 17:57:48.983214   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 17:57:48.983342   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:48.996243   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:48.997602   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:48.998521   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.006098   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.007578   19304 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 17:57:49.009063   19304 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 17:57:49.009089   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 17:57:49.009149   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:49.011494   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.016020   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.017302   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.018172   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.020929   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.043599   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.044319   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.044565   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.046917   19304 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 17:57:49.048674   19304 out.go:177]   - Using image docker.io/busybox:stable
	I0915 17:57:49.049872   19304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 17:57:49.049892   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 17:57:49.049949   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:49.050855   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	W0915 17:57:49.050875   19304 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 17:57:49.050904   19304 retry.go:31] will retry after 370.074531ms: ssh: handshake failed: EOF
	W0915 17:57:49.051710   19304 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 17:57:49.051734   19304 retry.go:31] will retry after 186.847865ms: ssh: handshake failed: EOF
	I0915 17:57:49.055480   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.070876   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:49.138742   19304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 17:57:49.138816   19304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
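The sed pipeline above injects a hosts block (mapping host.minikube.internal to 192.168.49.1) and a log directive into the CoreDNS Corefile before replacing the ConfigMap. A sketch for confirming the result afterwards, using the same kubectl and kubeconfig paths as the rest of this log:

	# Show the injected fragment of the patched Corefile.
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# Expected:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }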
	W0915 17:57:49.239783   19304 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 17:57:49.239814   19304 retry.go:31] will retry after 514.010771ms: ssh: handshake failed: EOF
	I0915 17:57:49.335057   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 17:57:49.423838   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 17:57:49.423863   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 17:57:49.424206   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 17:57:49.426453   19304 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 17:57:49.426478   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 17:57:49.523896   19304 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 17:57:49.523979   19304 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 17:57:49.619586   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 17:57:49.643171   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 17:57:49.723332   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 17:57:49.728604   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 17:57:49.729014   19304 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 17:57:49.729036   19304 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 17:57:49.732445   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 17:57:49.737493   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 17:57:49.737554   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 17:57:49.737865   19304 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 17:57:49.737912   19304 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 17:57:49.738820   19304 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 17:57:49.738875   19304 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 17:57:49.824253   19304 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 17:57:49.824347   19304 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 17:57:49.832850   19304 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 17:57:49.832933   19304 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 17:57:50.022037   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 17:57:50.022124   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 17:57:50.039240   19304 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 17:57:50.039331   19304 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 17:57:50.130718   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 17:57:50.225053   19304 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 17:57:50.225141   19304 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 17:57:50.231657   19304 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 17:57:50.231744   19304 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 17:57:50.239021   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 17:57:50.239099   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 17:57:50.243614   19304 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 17:57:50.243677   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 17:57:50.521095   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 17:57:50.521177   19304 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 17:57:50.526289   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 17:57:50.628948   19304 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 17:57:50.628979   19304 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 17:57:50.629296   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 17:57:50.635652   19304 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 17:57:50.635683   19304 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 17:57:50.934436   19304 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 17:57:50.934520   19304 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 17:57:51.037727   19304 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 17:57:51.037761   19304 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 17:57:51.128095   19304 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 17:57:51.128175   19304 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 17:57:51.235190   19304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.096338923s)
	I0915 17:57:51.235275   19304 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 17:57:51.236505   19304 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.097717269s)
	I0915 17:57:51.237601   19304 node_ready.go:35] waiting up to 6m0s for node "addons-924081" to be "Ready" ...
	I0915 17:57:51.329027   19304 node_ready.go:49] node "addons-924081" has status "Ready":"True"
	I0915 17:57:51.329114   19304 node_ready.go:38] duration metric: took 91.456442ms for node "addons-924081" to be "Ready" ...
	I0915 17:57:51.329140   19304 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 17:57:51.345947   19304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace to be "Ready" ...
	I0915 17:57:51.520790   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 17:57:51.520820   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 17:57:51.537777   19304 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 17:57:51.537863   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 17:57:51.636709   19304 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 17:57:51.636799   19304 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 17:57:51.740017   19304 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-924081" context rescaled to 1 replicas
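The rescale noted above is done through the API by kapi.go; the hand-run equivalent is a one-liner (sketch, same kubectl and kubeconfig as elsewhere in this log):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1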
	I0915 17:57:51.824428   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 17:57:51.824518   19304 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 17:57:51.835400   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 17:57:51.843445   19304 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 17:57:51.843532   19304 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 17:57:51.939297   19304 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 17:57:51.939382   19304 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 17:57:52.021511   19304 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 17:57:52.021608   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 17:57:52.137520   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 17:57:52.137611   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 17:57:52.442595   19304 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 17:57:52.442677   19304 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 17:57:52.522952   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 17:57:52.523042   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 17:57:52.622009   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 17:57:52.629205   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 17:57:52.924453   19304 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 17:57:52.924542   19304 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 17:57:52.926331   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.591189503s)
	I0915 17:57:53.032394   19304 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 17:57:53.032423   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 17:57:53.321553   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 17:57:53.425220   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:57:53.521890   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 17:57:55.431497   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:57:56.023485   19304 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 17:57:56.023565   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:56.051965   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:56.927152   19304 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 17:57:57.122543   19304 addons.go:234] Setting addon gcp-auth=true in "addons-924081"
	I0915 17:57:57.122654   19304 host.go:66] Checking if "addons-924081" exists ...
	I0915 17:57:57.123205   19304 cli_runner.go:164] Run: docker container inspect addons-924081 --format={{.State.Status}}
	I0915 17:57:57.152996   19304 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 17:57:57.153040   19304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-924081
	I0915 17:57:57.168993   19304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/addons-924081/id_rsa Username:docker}
	I0915 17:57:57.435030   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:57:58.333391   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.90914838s)
	I0915 17:57:58.333431   19304 addons.go:475] Verifying addon ingress=true in "addons-924081"
	I0915 17:57:58.334026   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.690824448s)
	I0915 17:57:58.334122   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.610760371s)
	I0915 17:57:58.334574   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.714307894s)
	I0915 17:57:58.336175   19304 out.go:177] * Verifying ingress addon...
	I0915 17:57:58.339644   19304 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 17:57:58.346312   19304 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 17:57:58.346330   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:57:58.927229   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:57:59.347543   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:57:59.850926   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:57:59.928610   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:00.345775   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:00.845038   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:01.433087   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:01.441534   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.712856931s)
	I0915 17:58:01.441647   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.70912328s)
	I0915 17:58:01.441715   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.310962423s)
	I0915 17:58:01.441876   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.812551205s)
	I0915 17:58:01.441899   19304 addons.go:475] Verifying addon registry=true in "addons-924081"
	I0915 17:58:01.441937   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.606424958s)
	I0915 17:58:01.442018   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.915448408s)
	I0915 17:58:01.442040   19304 addons.go:475] Verifying addon metrics-server=true in "addons-924081"
	I0915 17:58:01.442097   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.820047775s)
	I0915 17:58:01.442227   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.812932385s)
	W0915 17:58:01.442276   19304 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 17:58:01.442304   19304 retry.go:31] will retry after 352.53232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
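
Note: the failure above is the usual CRD registration race. The snapshot CRDs and a VolumeSnapshotClass that depends on them are submitted in a single kubectl apply, and the API server has not yet registered the new kind in discovery when the custom resource is validated, hence "ensure CRDs are installed first". minikube recovers by retrying the apply (retry.go:31). A minimal sketch of that retry pattern, assuming kubectl is on PATH and KUBECONFIG is already exported; this is illustrative, not minikube's actual code:

	// crd_retry.go: retry a kubectl apply until CRD-backed kinds resolve.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			// The first pass typically creates the CRDs; a later pass succeeds
			// once the API server has registered the new kinds in discovery.
			lastErr = fmt.Errorf("apply attempt %d: %v: %s", i+1, err, out)
			time.Sleep(backoff)
		}
		return lastErr
	}

	func main() {
		// Manifest path taken from the log above.
		if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 3, 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}
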
	I0915 17:58:01.521275   19304 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-924081 service yakd-dashboard -n yakd-dashboard
	
	I0915 17:58:01.521393   19304 out.go:177] * Verifying registry addon...
	I0915 17:58:01.525374   19304 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 17:58:01.528994   19304 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 17:58:01.529018   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
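
Note: each kapi.go:96 line that follows is one tick of a poll over a label selector; the wait only completes once every matching pod is Running. A sketch of such a loop with client-go, assuming a configured kubernetes.Interface; the helper name, interval, and timeout are illustrative, not minikube's actual implementation:

	// poll_pods.go: poll until all pods matching a selector are Running.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or no pods yet: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending, as logged above
					}
				}
				return true, nil
			})
	}

	func main() {} // wiring up a clientset is omitted in this sketch
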
	I0915 17:58:01.795447   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 17:58:01.922482   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:02.043257   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:02.349812   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:02.430039   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:02.530733   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:02.540466   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.218762454s)
	I0915 17:58:02.540546   19304 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-924081"
	I0915 17:58:02.540819   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.018887808s)
	I0915 17:58:02.540873   19304 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.38785262s)
	I0915 17:58:02.542420   19304 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 17:58:02.542424   19304 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 17:58:02.544319   19304 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 17:58:02.545678   19304 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 17:58:02.547011   19304 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 17:58:02.547038   19304 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 17:58:02.631803   19304 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 17:58:02.631894   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:02.648707   19304 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 17:58:02.648791   19304 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 17:58:02.755620   19304 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 17:58:02.755648   19304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
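
Note: the "scp memory -->" line above pushes a manifest that exists only in memory (the rendered gcp-auth webhook) onto the node, unlike the neighboring lines that copy files from disk. A sketch of pushing bytes over an established SSH connection, assuming a connected *ssh.Client from golang.org/x/crypto/ssh; the tee-based transfer is an assumption for illustration, with the destination path taken from the log:

	// push_bytes.go: write an in-memory manifest to a path on the remote node.
	package main

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	func pushBytes(c *ssh.Client, data []byte, dst string) error {
		sess, err := c.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee keeps the example dependency-free on the remote side.
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}

	func main() {} // establishing the ssh.Client is omitted in this sketch
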
	I0915 17:58:02.837137   19304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 17:58:02.844630   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:03.030295   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:03.050794   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:03.344008   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:03.529892   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:03.550241   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:03.843607   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:04.029461   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:04.131316   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:04.331231   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.535731057s)
	I0915 17:58:04.347743   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:04.445050   19304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.607863736s)
	I0915 17:58:04.447158   19304 addons.go:475] Verifying addon gcp-auth=true in "addons-924081"
	I0915 17:58:04.448901   19304 out.go:177] * Verifying gcp-auth addon...
	I0915 17:58:04.451469   19304 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 17:58:04.453862   19304 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 17:58:04.529576   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:04.550285   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:04.844290   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:04.852073   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:05.028651   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:05.050048   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:05.344078   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:05.555800   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:05.556652   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:05.845212   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:06.029253   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:06.051076   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:06.344091   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:06.529394   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:06.549877   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:06.844147   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:07.028919   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:07.052592   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:07.343387   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:07.351660   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:07.556879   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:07.557759   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:07.844228   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:08.029170   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:08.048910   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:08.343676   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:08.529817   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:08.550247   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:08.843867   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:09.028392   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:09.049318   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:09.343021   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:09.528168   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:09.550104   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:09.843773   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:09.850621   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:10.029177   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:10.050842   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:10.343803   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:10.528987   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:10.549982   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:10.843825   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:11.028817   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:11.050376   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:11.343766   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:11.556726   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:11.557062   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:11.843830   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:11.851305   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:12.029827   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:12.050448   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:12.343815   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:12.529478   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:12.549765   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:12.843620   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:13.029553   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:13.049277   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:13.344252   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:13.528468   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:13.549570   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:13.843756   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:13.851929   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:14.028943   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:14.050363   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:14.344079   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:14.529058   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:14.550859   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:14.843955   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:15.029611   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:15.050167   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:15.343981   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:15.528577   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:15.550635   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:15.890542   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:15.901327   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:16.028709   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:16.049657   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:16.343375   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:16.528622   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:16.550342   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:16.844062   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:17.028543   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:17.049543   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:17.343010   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:17.528744   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:17.549675   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:17.843282   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:18.029065   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:18.050107   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:18.343749   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:18.351222   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:18.529314   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:18.549779   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:18.843610   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:19.028574   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:19.050168   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:19.345370   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:19.529692   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:19.550527   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:19.844585   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:20.029403   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:20.050235   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:20.344542   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:20.351499   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:20.528739   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:20.549952   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:20.843236   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:21.028606   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:21.049660   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:21.343877   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:21.528734   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:21.550476   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:21.844019   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:22.056138   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:22.056764   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:22.344254   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:22.528970   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:22.550459   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:22.844293   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:22.851305   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:23.055759   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:23.056368   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:23.343527   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:23.529297   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:23.550283   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:23.843890   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:24.028282   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:24.049316   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:24.344365   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:24.529085   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:24.550572   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:24.844131   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:24.851793   19304 pod_ready.go:103] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"False"
	I0915 17:58:25.029023   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:25.050261   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:25.344389   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:25.528782   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:25.549415   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:25.842971   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:26.029006   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:26.049767   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:26.343456   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:26.350879   19304 pod_ready.go:93] pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.350897   19304 pod_ready.go:82] duration metric: took 35.004917128s for pod "coredns-7c65d6cfc9-8r5p2" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.350908   19304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nrsn5" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.352279   19304 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-nrsn5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-nrsn5" not found
	I0915 17:58:26.352296   19304 pod_ready.go:82] duration metric: took 1.381723ms for pod "coredns-7c65d6cfc9-nrsn5" in "kube-system" namespace to be "Ready" ...
	E0915 17:58:26.352304   19304 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-nrsn5" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-nrsn5" not found
	I0915 17:58:26.352311   19304 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.355863   19304 pod_ready.go:93] pod "etcd-addons-924081" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.355878   19304 pod_ready.go:82] duration metric: took 3.56196ms for pod "etcd-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.355886   19304 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.359162   19304 pod_ready.go:93] pod "kube-apiserver-addons-924081" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.359178   19304 pod_ready.go:82] duration metric: took 3.286396ms for pod "kube-apiserver-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.359189   19304 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.362373   19304 pod_ready.go:93] pod "kube-controller-manager-addons-924081" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.362388   19304 pod_ready.go:82] duration metric: took 3.192079ms for pod "kube-controller-manager-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.362399   19304 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-djh4b" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.528663   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:26.549737   19304 pod_ready.go:93] pod "kube-proxy-djh4b" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.549763   19304 pod_ready.go:82] duration metric: took 187.35433ms for pod "kube-proxy-djh4b" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.549775   19304 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.549920   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:26.843646   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:26.950311   19304 pod_ready.go:93] pod "kube-scheduler-addons-924081" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:26.950333   19304 pod_ready.go:82] duration metric: took 400.50753ms for pod "kube-scheduler-addons-924081" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:26.950342   19304 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nhvqc" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:27.029166   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:27.049046   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:27.342901   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:27.349261   19304 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nhvqc" in "kube-system" namespace has status "Ready":"True"
	I0915 17:58:27.349283   19304 pod_ready.go:82] duration metric: took 398.933608ms for pod "nvidia-device-plugin-daemonset-nhvqc" in "kube-system" namespace to be "Ready" ...
	I0915 17:58:27.349294   19304 pod_ready.go:39] duration metric: took 36.020106565s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
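
Note: the pod_ready.go lines gate on the PodReady condition rather than the pod phase, which is why a pod can report phase Running while still being logged as Ready "False" (for example, while a readiness probe is failing). The check reduces to roughly the following; this is an illustrative helper, not minikube's actual function:

	// pod_ready.go: report whether the PodReady condition is True.
	package main

	import (
		corev1 "k8s.io/api/core/v1"
	)

	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {}
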
	I0915 17:58:27.349348   19304 api_server.go:52] waiting for apiserver process to appear ...
	I0915 17:58:27.349415   19304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 17:58:27.362636   19304 api_server.go:72] duration metric: took 38.496810723s to wait for apiserver process to appear ...
	I0915 17:58:27.362664   19304 api_server.go:88] waiting for apiserver healthz status ...
	I0915 17:58:27.362681   19304 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 17:58:27.366156   19304 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 17:58:27.367018   19304 api_server.go:141] control plane version: v1.31.1
	I0915 17:58:27.367039   19304 api_server.go:131] duration metric: took 4.370242ms to wait for apiserver health ...
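
Note: the api_server.go:253 check above is an HTTPS GET against /healthz that succeeds on a 200 response with body "ok". A bare-bones probe under the assumption that the endpoint accepts the request; InsecureSkipVerify is for illustration only, since minikube's real client presents the cluster's credentials and CA:

	// healthz_probe.go: minimal apiserver health probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" as in the log
	}
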
	I0915 17:58:27.367046   19304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 17:58:27.529146   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:27.550009   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:27.556476   19304 system_pods.go:59] 18 kube-system pods found
	I0915 17:58:27.556504   19304 system_pods.go:61] "coredns-7c65d6cfc9-8r5p2" [1956ba7b-4677-4f65-9946-3ecb8a5db57b] Running
	I0915 17:58:27.556512   19304 system_pods.go:61] "csi-hostpath-attacher-0" [d9e70213-5a19-4ff0-a34c-aa58ece1ff75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 17:58:27.556519   19304 system_pods.go:61] "csi-hostpath-resizer-0" [4abcdbf1-e874-41c7-9d35-a1745bb07c16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 17:58:27.556526   19304 system_pods.go:61] "csi-hostpathplugin-86wk4" [4cb1592c-4b1d-42bc-969c-43e9937ec9b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 17:58:27.556532   19304 system_pods.go:61] "etcd-addons-924081" [083f6778-402e-4aa4-a4db-3e8873271227] Running
	I0915 17:58:27.556537   19304 system_pods.go:61] "kube-apiserver-addons-924081" [620085ab-d25f-4483-8ef3-31b69a03c7db] Running
	I0915 17:58:27.556543   19304 system_pods.go:61] "kube-controller-manager-addons-924081" [370a7ed1-3f52-41c3-96e7-3cece8bc350a] Running
	I0915 17:58:27.556549   19304 system_pods.go:61] "kube-ingress-dns-minikube" [661ba882-e028-4cdd-bb37-8ee95de61c69] Running
	I0915 17:58:27.556554   19304 system_pods.go:61] "kube-proxy-djh4b" [06adfc98-36ba-4500-b6a0-2887a8b024b3] Running
	I0915 17:58:27.556559   19304 system_pods.go:61] "kube-scheduler-addons-924081" [d8141836-e57b-4f9c-91d3-c8228b01a81e] Running
	I0915 17:58:27.556564   19304 system_pods.go:61] "metrics-server-84c5f94fbc-g29nd" [d0a4650f-3b55-4081-b127-353cca2c9570] Running
	I0915 17:58:27.556572   19304 system_pods.go:61] "nvidia-device-plugin-daemonset-nhvqc" [47a2d060-ff2d-4161-9188-f26d8cb11aa1] Running
	I0915 17:58:27.556576   19304 system_pods.go:61] "registry-66c9cd494c-85p89" [7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131] Running
	I0915 17:58:27.556584   19304 system_pods.go:61] "registry-proxy-lrwnn" [727ad348-b4a0-40a9-a423-cac288b38182] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 17:58:27.556595   19304 system_pods.go:61] "snapshot-controller-56fcc65765-kr6b5" [1ceebd9b-a2ce-49df-8214-2a12a64b390e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 17:58:27.556605   19304 system_pods.go:61] "snapshot-controller-56fcc65765-zrwhx" [aee77165-1ded-4628-8591-f325e4697bbf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 17:58:27.556612   19304 system_pods.go:61] "storage-provisioner" [b92320a2-149a-4ffe-9494-baa89c63524d] Running
	I0915 17:58:27.556617   19304 system_pods.go:61] "tiller-deploy-b48cc5f79-8kwdn" [44490bdf-edf8-403c-a16b-77e4a27b2aca] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 17:58:27.556625   19304 system_pods.go:74] duration metric: took 189.574073ms to wait for pod list to return data ...
	I0915 17:58:27.556633   19304 default_sa.go:34] waiting for default service account to be created ...
	I0915 17:58:27.749146   19304 default_sa.go:45] found service account: "default"
	I0915 17:58:27.749168   19304 default_sa.go:55] duration metric: took 192.529995ms for default service account to be created ...
	I0915 17:58:27.749177   19304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 17:58:27.843251   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:27.956624   19304 system_pods.go:86] 18 kube-system pods found
	I0915 17:58:27.956652   19304 system_pods.go:89] "coredns-7c65d6cfc9-8r5p2" [1956ba7b-4677-4f65-9946-3ecb8a5db57b] Running
	I0915 17:58:27.956664   19304 system_pods.go:89] "csi-hostpath-attacher-0" [d9e70213-5a19-4ff0-a34c-aa58ece1ff75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 17:58:27.956672   19304 system_pods.go:89] "csi-hostpath-resizer-0" [4abcdbf1-e874-41c7-9d35-a1745bb07c16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 17:58:27.956683   19304 system_pods.go:89] "csi-hostpathplugin-86wk4" [4cb1592c-4b1d-42bc-969c-43e9937ec9b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 17:58:27.956690   19304 system_pods.go:89] "etcd-addons-924081" [083f6778-402e-4aa4-a4db-3e8873271227] Running
	I0915 17:58:27.956699   19304 system_pods.go:89] "kube-apiserver-addons-924081" [620085ab-d25f-4483-8ef3-31b69a03c7db] Running
	I0915 17:58:27.956708   19304 system_pods.go:89] "kube-controller-manager-addons-924081" [370a7ed1-3f52-41c3-96e7-3cece8bc350a] Running
	I0915 17:58:27.956719   19304 system_pods.go:89] "kube-ingress-dns-minikube" [661ba882-e028-4cdd-bb37-8ee95de61c69] Running
	I0915 17:58:27.956724   19304 system_pods.go:89] "kube-proxy-djh4b" [06adfc98-36ba-4500-b6a0-2887a8b024b3] Running
	I0915 17:58:27.956733   19304 system_pods.go:89] "kube-scheduler-addons-924081" [d8141836-e57b-4f9c-91d3-c8228b01a81e] Running
	I0915 17:58:27.956739   19304 system_pods.go:89] "metrics-server-84c5f94fbc-g29nd" [d0a4650f-3b55-4081-b127-353cca2c9570] Running
	I0915 17:58:27.956745   19304 system_pods.go:89] "nvidia-device-plugin-daemonset-nhvqc" [47a2d060-ff2d-4161-9188-f26d8cb11aa1] Running
	I0915 17:58:27.956750   19304 system_pods.go:89] "registry-66c9cd494c-85p89" [7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131] Running
	I0915 17:58:27.956759   19304 system_pods.go:89] "registry-proxy-lrwnn" [727ad348-b4a0-40a9-a423-cac288b38182] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 17:58:27.956765   19304 system_pods.go:89] "snapshot-controller-56fcc65765-kr6b5" [1ceebd9b-a2ce-49df-8214-2a12a64b390e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 17:58:27.956778   19304 system_pods.go:89] "snapshot-controller-56fcc65765-zrwhx" [aee77165-1ded-4628-8591-f325e4697bbf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 17:58:27.956784   19304 system_pods.go:89] "storage-provisioner" [b92320a2-149a-4ffe-9494-baa89c63524d] Running
	I0915 17:58:27.956795   19304 system_pods.go:89] "tiller-deploy-b48cc5f79-8kwdn" [44490bdf-edf8-403c-a16b-77e4a27b2aca] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0915 17:58:27.956808   19304 system_pods.go:126] duration metric: took 207.62584ms to wait for k8s-apps to be running ...
	I0915 17:58:27.956818   19304 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 17:58:27.956867   19304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 17:58:27.971680   19304 system_svc.go:56] duration metric: took 14.853041ms WaitForService to wait for kubelet
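
Note: the system_svc.go lines verify the kubelet unit with exactly the command logged at 17:58:27.956867; an exit status of 0 from systemctl is-active --quiet means the unit is active. As a sketch, assuming passwordless sudo on the node as in the CI image:

	// kubelet_active.go: mirror the logged systemctl check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 <=> the unit is active; any other code means inactive/failed.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
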
	I0915 17:58:27.971712   19304 kubeadm.go:582] duration metric: took 39.105889227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 17:58:27.971736   19304 node_conditions.go:102] verifying NodePressure condition ...
	I0915 17:58:28.028992   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:28.050327   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:28.150240   19304 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0915 17:58:28.150275   19304 node_conditions.go:123] node cpu capacity is 8
	I0915 17:58:28.150292   19304 node_conditions.go:105] duration metric: took 178.550174ms to run NodePressure ...
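
Note: the NodePressure verification at node_conditions.go reads the node's reported capacity (304681132Ki ephemeral storage and 8 CPUs here) and confirms that none of the kubelet pressure conditions are True. The condition scan is roughly the following illustrative helper:

	// node_pressure.go: report whether any kubelet pressure condition is True.
	package main

	import (
		corev1 "k8s.io/api/core/v1"
	)

	func nodeUnderPressure(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		return false
	}

	func main() {}
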
	I0915 17:58:28.150306   19304 start.go:241] waiting for startup goroutines ...
	I0915 17:58:28.344166   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:28.529010   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:28.550336   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:28.843787   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:29.029638   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:29.050210   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:29.344539   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:29.529581   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:29.550277   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:29.844452   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:30.032032   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:30.049850   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:30.343431   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:30.529707   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:30.549480   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:30.843095   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:31.055511   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 17:58:31.056032   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:31.343735   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:31.529003   19304 kapi.go:107] duration metric: took 30.003631074s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 17:58:31.550085   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:31.844241   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:32.049958   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:32.344342   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:32.549220   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:32.844891   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:33.050116   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:33.344323   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:33.550324   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:33.844337   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:34.050204   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:34.343414   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:34.556590   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:34.843643   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:35.049921   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:35.343858   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:35.549615   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:35.844518   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:36.050359   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:36.343893   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:36.550268   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:36.844655   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:37.049956   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:37.343540   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:37.550657   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:37.844885   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:38.049400   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:38.343861   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:38.549688   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:38.843350   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:39.050517   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:39.344598   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:39.549791   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:39.843996   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:40.050218   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:40.344034   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:40.550641   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:40.843588   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:41.056010   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:41.343848   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:41.550510   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:41.844034   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:42.050530   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:42.362714   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:42.623978   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:42.844113   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:43.049863   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:43.342906   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:43.549853   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:43.843982   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:44.050347   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:44.343788   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:44.550666   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:44.844949   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:45.050206   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:45.343586   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:45.549947   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:45.843875   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:46.052222   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:46.343926   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:46.550212   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:46.843942   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:47.050092   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:47.343649   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:47.608262   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:47.843971   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:48.049987   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:48.343494   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:48.551040   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:48.843718   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:49.050628   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:49.343762   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:49.550896   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:49.843467   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:50.114434   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:50.344135   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:50.551442   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:50.844666   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:51.050832   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:51.343415   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:51.550185   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:51.844604   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:52.050637   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:52.344158   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:52.549497   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:52.845006   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:53.057238   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:53.344502   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:53.550080   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:53.844458   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:54.050074   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:54.343296   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:54.556998   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:54.844732   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:55.050405   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:55.343887   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:55.567009   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:55.843933   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:56.050506   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:56.344468   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:56.550055   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:56.844250   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:57.057673   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:57.360365   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:57.550041   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:57.844656   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:58.050306   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:58.344484   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:58.557370   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:58.844609   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:59.050615   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 17:58:59.343389   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:58:59.550156   19304 kapi.go:107] duration metric: took 57.004478741s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 17:58:59.843691   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:00.342865   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:00.843087   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:01.343410   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:01.844106   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:02.343477   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:02.844488   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:03.344031   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:03.843891   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:04.344403   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:04.844708   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:05.345487   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:05.844082   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:06.343583   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:06.844551   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:07.343415   19304 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 17:59:07.845794   19304 kapi.go:107] duration metric: took 1m9.506150381s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 17:59:27.954409   19304 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 17:59:27.954433   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:28.454426   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:28.954535   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:29.454742   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:29.954551   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:30.454348   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:30.955265   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:31.455499   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:31.954455   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:32.454102   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:32.955689   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:33.454789   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:33.955063   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:34.455187   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:34.954447   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:35.454532   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:35.954422   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:36.454288   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:36.955508   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:37.454526   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:37.954607   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:38.454384   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:38.955643   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:39.454934   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:39.954423   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:40.454014   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:40.954838   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:41.455095   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:41.954934   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:42.454436   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:42.954270   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:43.455288   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:43.955377   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:44.454218   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:44.955023   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:45.455057   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:45.954929   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:46.454690   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:46.955001   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:47.454914   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:47.955181   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:48.454965   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:48.954450   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:49.454792   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:49.954579   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:50.454214   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:50.955554   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:51.454856   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:51.954816   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:52.454502   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:52.954343   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:53.454547   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:53.956702   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:54.454951   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:54.955279   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:55.454350   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:55.955196   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:56.454952   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:56.954777   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:57.455206   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:57.954804   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:58.454490   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:58.954371   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:59.454673   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 17:59:59.954476   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:00.454140   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:00.955103   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:01.454846   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:01.954437   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:02.454174   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:02.955408   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:03.454710   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:03.954486   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:04.454604   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:04.954628   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:05.454771   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:05.954464   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:06.454135   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:06.954654   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:07.454828   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:07.956250   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:08.455022   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:08.955239   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:09.456112   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:09.954646   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:10.454781   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:10.954736   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:11.454599   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:11.954969   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:12.454646   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:12.954837   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:13.454936   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:13.954503   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:14.454655   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:14.954999   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:15.455247   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:15.955099   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:16.455480   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:16.955115   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:17.454919   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:17.954912   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:18.454442   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:18.954385   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:19.454903   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:19.955098   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:20.454725   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:20.954627   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:21.454576   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:21.954507   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:22.454304   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:22.955220   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:23.455451   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:23.954851   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:24.454941   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:24.955173   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:25.455174   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:25.954956   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:26.454954   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:26.955286   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:27.455796   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:27.953991   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:28.454472   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:28.955298   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:29.454480   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:29.953986   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:30.454737   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:30.954655   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:31.454702   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:31.954718   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:32.454633   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:32.955861   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:33.455192   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:33.954702   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:34.455050   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:34.954490   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:35.454527   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:35.954492   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:36.454381   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:36.955138   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:37.455721   19304 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 18:00:37.955002   19304 kapi.go:107] duration metric: took 2m33.503531422s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 18:00:37.956862   19304 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-924081 cluster.
	I0915 18:00:37.958378   19304 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 18:00:37.959887   19304 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 18:00:37.961345   19304 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, volcano, cloud-spanner, metrics-server, helm-tiller, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0915 18:00:37.962788   19304 addons.go:510] duration metric: took 2m49.096897282s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher volcano cloud-spanner metrics-server helm-tiller yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0915 18:00:37.962840   19304 start.go:246] waiting for cluster config update ...
	I0915 18:00:37.962870   19304 start.go:255] writing updated cluster config ...
	I0915 18:00:37.963178   19304 ssh_runner.go:195] Run: rm -f paused
	I0915 18:00:38.013286   19304 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 18:00:38.015166   19304 out.go:177] * Done! kubectl is now configured to use "addons-924081" cluster and "default" namespace by default
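
Editor's note: the long runs of kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending, then logging a duration metric (kapi.go:107). What follows is a minimal client-go sketch of that list-and-sleep pattern, not minikube's actual kapi implementation; the kubeconfig loading, selector, interval, and timeout are illustrative.

// Minimal sketch of the wait-for-pods-by-label pattern visible in the
// kapi.go log lines above. Assumes a kubeconfig-based clientset; the
// selector, interval, and timeout are illustrative, not minikube's.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, interval time.Duration) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil
		}
		// Matches the log shape above: keep polling while pods are Pending.
		fmt.Printf("waiting for pod %q, %d/%d running\n", selector, running, len(pods.Items))
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond); err != nil {
		panic(err)
	}
}

The ~250-500ms cadence of the timestamps above corresponds to the poll interval in such a loop; the duration metrics (57s, 1m9s, 2m33s) are simply elapsed wall time until all matching pods reported Running.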
	
	
	==> Docker <==
	Sep 15 18:09:59 addons-924081 dockerd[1342]: time="2024-09-15T18:09:59.645231356Z" level=info msg="ignoring event" container=6d0fadb9deebfd14b70831ef8db3b6ad56c37cd194f986e5ea2c86eb06b14836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:59 addons-924081 dockerd[1342]: time="2024-09-15T18:09:59.829302639Z" level=info msg="ignoring event" container=652ef040ff28a424626a4bbcb186d80e79d14cc75da789f95bb0231734d997e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:59 addons-924081 dockerd[1342]: time="2024-09-15T18:09:59.839170418Z" level=info msg="ignoring event" container=bf4bc0c5191c154f4502856c6f49c47cedc23e7335274a36881735ec8002aa60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:00 addons-924081 dockerd[1342]: time="2024-09-15T18:10:00.034128742Z" level=info msg="ignoring event" container=68fbf1b0779f8f23541458bad09664c3b3d553a264ae628c77bdff20e7dfad08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:00 addons-924081 dockerd[1342]: time="2024-09-15T18:10:00.635822841Z" level=info msg="ignoring event" container=debbf53b4ab3326b68cdb1d6a063a27edaa92afb5e669466b7659f1dca714a70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:00 addons-924081 dockerd[1342]: time="2024-09-15T18:10:00.649568484Z" level=info msg="ignoring event" container=4f9ed26676baea430f41a447a86c0c60b5521a4b29e2f5b87b26743fca58989f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:00 addons-924081 dockerd[1342]: time="2024-09-15T18:10:00.818651884Z" level=info msg="ignoring event" container=9dda5677134430853c93af4fc0ece6b20af4f3ea92c1ae937eca2d8f300045a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:00 addons-924081 dockerd[1342]: time="2024-09-15T18:10:00.861345895Z" level=info msg="ignoring event" container=99f851b499b502039ae4b1c74084fde375e9bae7e21c30021818afbacf86ba31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:01 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc929ba652317e6d6c550ee97115acc2fd87e73ba7ddd9b866e7e8133d6d894d/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 15 18:10:02 addons-924081 dockerd[1342]: time="2024-09-15T18:10:02.173527531Z" level=warning msg="reference for unknown type: " digest="sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" remote="ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971"
	Sep 15 18:10:05 addons-924081 dockerd[1342]: time="2024-09-15T18:10:05.384044956Z" level=info msg="ignoring event" container=4159ab2ccb2e3aeb85d1eac68c5e98f6e1a429bebef74f96b99c23d3cf5b2f5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:11 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/efce7c5474bd6ec9e23d209fd820f6a0caa9fc6105e69c9893911aaf50336e58/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 15 18:10:12 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:12Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971: c9712956fa89: Downloading [============================>                      ]  19.54MB/33.82MB"
	Sep 15 18:10:20 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:20Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971"
	Sep 15 18:10:20 addons-924081 dockerd[1342]: time="2024-09-15T18:10:20.337656414Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 18:10:20 addons-924081 dockerd[1342]: time="2024-09-15T18:10:20.409416814Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 18:10:23 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:23Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 15 18:10:30 addons-924081 dockerd[1342]: time="2024-09-15T18:10:30.416178833Z" level=info msg="ignoring event" container=694ac69cd8f251d649349c8414fae36813e5a1524b509d0550e0cd7cbd773f9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:30 addons-924081 cri-dockerd[1607]: time="2024-09-15T18:10:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f0bae8633a79b0cafaab23533e340bb9eb4ea231a7c7a6db3b6d2a7252dcd628/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.035912607Z" level=info msg="ignoring event" container=99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.131956765Z" level=info msg="ignoring event" container=86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.189807192Z" level=info msg="ignoring event" container=71985211be19f5926e075487d7f21b805b3edd6f00b346d8e924adee636fc916 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.323282482Z" level=info msg="ignoring event" container=466f137f35e637dbae2f34b6d117ccb4d09869177a3fcd59e37ac80bee303505 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.451131912Z" level=info msg="ignoring event" container=6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:10:31 addons-924081 dockerd[1342]: time="2024-09-15T18:10:31.500116116Z" level=info msg="ignoring event" container=eb4793a100e16552c35fed357b2d2fe67ce964e70632c742091ad567711c4399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
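
Editor's note: the dockerd entries at 18:10:20 are the proximate failure behind the Registry test timeout: the manifest HEAD for gcr.io/k8s-minikube/busybox (tag latest) came back "unauthorized: authentication failed", so the registry-test pod never pulled its image and the wget probe timed out. The probe can be reproduced outside Docker with a plain HTTP request; a hedged sketch follows (the URL is copied verbatim from the log line, the Accept header is illustrative).

// Reproduces the manifest HEAD probe from the dockerd log above using
// only net/http. A real registry client would first obtain a bearer
// token, which is exactly the step that failed here.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://gcr.io/v2/k8s-minikube/busybox/manifests/latest"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	// Ask for a manifest media type the registry serves (illustrative).
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Expect 401 Unauthorized without a token, matching the daemon log.
	fmt.Println(resp.Status)
}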
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef93d4aa88788       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                9 seconds ago       Running             nginx                     0                   efce7c5474bd6       nginx
	52d9e76e8846c       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        12 seconds ago      Running             headlamp                  0                   fc929ba652317       headlamp-57fb76fcdb-pcmm5
	05408f6b2b993       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   ba961def1a2bd       gcp-auth-89d5ffd79-d9sgx
	aaa57b3f73755       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   3a9ef92e1c7c3       ingress-nginx-controller-bc57996ff-vr8pl
	ed27aad05ae80       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   593a0a466fabb       ingress-nginx-admission-patch-qdf6l
	a13bca916718f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   c44a704d322bb       ingress-nginx-admission-create-7pvj5
	182d5cd1be62d       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   e9d0f91b1ef6d       storage-provisioner
	f3fdc5a30a919       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   67261b45a9146       coredns-7c65d6cfc9-8r5p2
	305fd1c940590       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   d43465abd9705       kube-proxy-djh4b
	66f12aeb790eb       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   77fe25ea23941       kube-apiserver-addons-924081
	9f19c328253e6       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   4f2c562feff0f       etcd-addons-924081
	ee0638f0200f9       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   1607b21ed569e       kube-controller-manager-addons-924081
	511083959df7e       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   38555d6b4a477       kube-scheduler-addons-924081
	
	
	==> controller_ingress [aaa57b3f7375] <==
	I0915 17:59:08.665830       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0915 17:59:08.665921       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vr8pl", UID:"ecfcd2ea-e4ba-4096-ae5f-eb89b0cd9dd4", APIVersion:"v1", ResourceVersion:"1289", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 18:10:10.777065       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 18:10:10.795254       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.018s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:0.019s testedConfigurationSize:18.1kB}
	I0915 18:10:10.795290       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0915 18:10:10.798473       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0915 18:10:10.798649       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"6dee2574-d6b1-4bc2-8652-d1656c7b5f07", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2974", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0915 18:10:10.798867       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 18:10:10.798948       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 18:10:10.836201       7 controller.go:213] "Backend successfully reloaded"
	I0915 18:10:10.836429       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vr8pl", UID:"ecfcd2ea-e4ba-4096-ae5f-eb89b0cd9dd4", APIVersion:"v1", ResourceVersion:"1289", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 18:10:14.133663       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0915 18:10:14.133807       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 18:10:14.170097       7 controller.go:213] "Backend successfully reloaded"
	I0915 18:10:14.170289       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vr8pl", UID:"ecfcd2ea-e4ba-4096-ae5f-eb89b0cd9dd4", APIVersion:"v1", ResourceVersion:"1289", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 18:10:21.530018       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	W0915 18:10:30.442466       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0915 18:10:30.468168       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.026s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.026s testedConfigurationSize:26.2kB}
	I0915 18:10:30.468196       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0915 18:10:30.526709       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0915 18:10:30.526902       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"1644cac0-6956-422a-85ab-ef81a36f33ad", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3044", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0915 18:10:31.530220       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 18:10:31.578672       7 controller.go:213] "Backend successfully reloaded"
	I0915 18:10:31.579035       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vr8pl", UID:"ecfcd2ea-e4ba-4096-ae5f-eb89b0cd9dd4", APIVersion:"v1", ResourceVersion:"1289", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	10.244.0.1 - - [15/Sep/2024:18:10:30 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.000 [default-nginx-80] [] 10.244.0.37:80 615 0.001 200 8dbbfb156245d59455be6a1f8a0514aa
	
	
	==> coredns [f3fdc5a30a91] <==
	[INFO] 10.244.0.22:40155 - 54502 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005100416s
	[INFO] 10.244.0.22:55349 - 53091 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005365613s
	[INFO] 10.244.0.22:40155 - 41295 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005048283s
	[INFO] 10.244.0.22:57499 - 23314 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005445218s
	[INFO] 10.244.0.22:42673 - 62895 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005362245s
	[INFO] 10.244.0.22:46216 - 37926 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005473042s
	[INFO] 10.244.0.22:50828 - 46710 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003571893s
	[INFO] 10.244.0.22:37729 - 24951 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005408336s
	[INFO] 10.244.0.22:50132 - 5192 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00569195s
	[INFO] 10.244.0.22:57499 - 55638 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003046684s
	[INFO] 10.244.0.22:37729 - 37944 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00404348s
	[INFO] 10.244.0.22:46216 - 3305 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004440124s
	[INFO] 10.244.0.22:42673 - 12631 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004371989s
	[INFO] 10.244.0.22:57499 - 47270 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000206402s
	[INFO] 10.244.0.22:37729 - 39026 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056709s
	[INFO] 10.244.0.22:55349 - 28210 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004843783s
	[INFO] 10.244.0.22:40155 - 21984 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004346055s
	[INFO] 10.244.0.22:50828 - 42891 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005009271s
	[INFO] 10.244.0.22:50132 - 9731 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004254036s
	[INFO] 10.244.0.22:50132 - 5871 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078617s
	[INFO] 10.244.0.22:46216 - 39960 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054386s
	[INFO] 10.244.0.22:55349 - 20118 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044347s
	[INFO] 10.244.0.22:42673 - 28304 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041256s
	[INFO] 10.244.0.22:40155 - 2135 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042175s
	[INFO] 10.244.0.22:50828 - 49741 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041548s
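
Editor's note: the coredns block above is a trace of resolv.conf ndots:5 search-path expansion. Each lookup of hello-world-app.default.svc.cluster.local is first tried against the node's search domains (c.k8s-minikube.internal, google.internal, ...), producing the NXDOMAIN runs, before the in-cluster name answers NOERROR. Appending a trailing dot makes the name fully qualified and skips the expansion; a minimal sketch, assuming it runs inside a pod on this cluster (the service name is taken from the log above):

// Resolving the service name as an FQDN (note the trailing dot)
// bypasses search-domain expansion, avoiding the NXDOMAIN round
// trips seen in the coredns log.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := net.DefaultResolver.LookupHost(ctx, "hello-world-app.default.svc.cluster.local.")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}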
	
	
	==> describe nodes <==
	Name:               addons-924081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-924081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673
	                    minikube.k8s.io/name=addons-924081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T17_57_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-924081
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 17:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-924081
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 18:10:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 18:10:18 +0000   Sun, 15 Sep 2024 17:57:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 18:10:18 +0000   Sun, 15 Sep 2024 17:57:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 18:10:18 +0000   Sun, 15 Sep 2024 17:57:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 18:10:18 +0000   Sun, 15 Sep 2024 17:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-924081
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 eccf7df5bf9a4dc784472dbbf26344c8
	  System UUID:                7e2b3237-3df5-42a6-b42c-1a16f34283e0
	  Boot ID:                    c04e1fd2-9f8e-4626-99c0-4aa7783c27aa
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-xz47d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  gcp-auth                    gcp-auth-89d5ffd79-d9sgx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-pcmm5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vr8pl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-8r5p2                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-924081                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-924081                250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-924081       200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-djh4b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-924081                100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-924081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-924081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-924081 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-924081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-924081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-924081 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-924081 event: Registered Node addons-924081 in Controller
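
Editor's note: the Conditions table in the node description above is what programmatic readiness checks consume. A short client-go sketch reading the same Ready condition for addons-924081 (node name from this report; kubeconfig loading is illustrative):

// Reads the Ready condition shown in the node description above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-924081", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s since %s: %s\n", c.Status, c.LastTransitionTime, c.Message)
		}
	}
}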
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 56 0b f4 44 87 08 06
	[  +2.411845] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 9b 9c 20 66 90 08 06
	[  +6.110541] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 dc f0 c5 4d 48 08 06
	[  +0.070706] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 75 de c9 64 42 08 06
	[  +0.220930] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 23 7e 80 66 f1 08 06
	[Sep15 17:59] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 98 84 8b 95 48 08 06
	[Sep15 18:00] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 85 a8 ef 88 84 08 06
	[  +0.105563] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de ab c0 e4 84 7d 08 06
	[ +29.129381] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 53 e4 fe cd a8 08 06
	[  +0.000499] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 2c 6b 33 6e 7c 08 06
	[Sep15 18:09] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 34 d0 4c bb 8f 08 06
	[Sep15 18:10] IPv4: martian source 10.244.0.37 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 98 84 8b 95 48 08 06
	[  +1.372552] IPv4: martian source 10.244.0.22 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 2c 6b 33 6e 7c 08 06
	
	
	==> etcd [9f19c328253e] <==
	{"level":"info","ts":"2024-09-15T17:57:40.057823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T17:57:40.057841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-15T17:57:40.057862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T17:57:40.057869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T17:57:40.057878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T17:57:40.057886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T17:57:40.058742Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:57:40.059466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T17:57:40.059470Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-924081 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T17:57:40.059490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T17:57:40.059807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T17:57:40.059888Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T17:57:40.060048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:57:40.060126Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:57:40.060154Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:57:40.060727Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T17:57:40.060802Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T17:57:40.061592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T17:57:40.061991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-15T17:58:15.888207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.122676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f57ca5f76c3fd9\" ","response":"range_response_count:1 size:928"}
	{"level":"info","ts":"2024-09-15T17:58:15.888319Z","caller":"traceutil/trace.go:171","msg":"trace[418854937] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f57ca5f76c3fd9; range_end:; response_count:1; response_revision:1028; }","duration":"111.272227ms","start":"2024-09-15T17:58:15.777033Z","end":"2024-09-15T17:58:15.888305Z","steps":["trace[418854937] 'range keys from in-memory index tree'  (duration: 110.997919ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T18:07:40.143589Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1913}
	{"level":"info","ts":"2024-09-15T18:07:40.168196Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1913,"took":"24.089331ms","hash":3826931549,"current-db-size-bytes":8896512,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5033984,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-15T18:07:40.168252Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3826931549,"revision":1913,"compact-revision":-1}
	{"level":"info","ts":"2024-09-15T18:10:20.609387Z","caller":"traceutil/trace.go:171","msg":"trace[1839691361] transaction","detail":"{read_only:false; response_revision:3001; number_of_response:1; }","duration":"118.418774ms","start":"2024-09-15T18:10:20.490944Z","end":"2024-09-15T18:10:20.609362Z","steps":["trace[1839691361] 'process raft request'  (duration: 60.597159ms)","trace[1839691361] 'compare'  (duration: 57.69822ms)"],"step_count":2}
	
	
	==> gcp-auth [05408f6b2b99] <==
	2024/09/15 18:01:16 Ready to write response ...
	2024/09/15 18:09:19 Ready to marshal response ...
	2024/09/15 18:09:19 Ready to write response ...
	2024/09/15 18:09:19 Ready to marshal response ...
	2024/09/15 18:09:19 Ready to write response ...
	2024/09/15 18:09:28 Ready to marshal response ...
	2024/09/15 18:09:28 Ready to write response ...
	2024/09/15 18:09:29 Ready to marshal response ...
	2024/09/15 18:09:29 Ready to write response ...
	2024/09/15 18:09:30 Ready to marshal response ...
	2024/09/15 18:09:30 Ready to write response ...
	2024/09/15 18:09:43 Ready to marshal response ...
	2024/09/15 18:09:43 Ready to write response ...
	2024/09/15 18:09:54 Ready to marshal response ...
	2024/09/15 18:09:54 Ready to write response ...
	2024/09/15 18:10:01 Ready to marshal response ...
	2024/09/15 18:10:01 Ready to write response ...
	2024/09/15 18:10:01 Ready to marshal response ...
	2024/09/15 18:10:01 Ready to write response ...
	2024/09/15 18:10:01 Ready to marshal response ...
	2024/09/15 18:10:01 Ready to write response ...
	2024/09/15 18:10:10 Ready to marshal response ...
	2024/09/15 18:10:10 Ready to write response ...
	2024/09/15 18:10:30 Ready to marshal response ...
	2024/09/15 18:10:30 Ready to write response ...
	
	
	==> kernel <==
	 18:10:32 up 52 min,  0 users,  load average: 0.84, 0.43, 0.30
	Linux addons-924081 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [66f12aeb790e] <==
	E0915 18:09:29.992520       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 18:09:29.999594       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 18:09:30.006284       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 18:09:37.598366       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 18:09:45.006230       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 18:10:00.388021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:10:00.388079       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:10:00.401084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:10:00.401138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:10:00.402240       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:10:00.402286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:10:00.422938       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:10:00.422987       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:10:00.523086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:10:00.523161       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 18:10:01.418985       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0915 18:10:01.481701       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.199.10"}
	W0915 18:10:01.523187       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0915 18:10:01.539463       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 18:10:05.340066       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 18:10:06.458166       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 18:10:10.796054       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 18:10:11.020774       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.39.176"}
	I0915 18:10:25.238514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0915 18:10:30.619628       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.38.43"}
	
	
	==> kube-controller-manager [ee0638f0200f] <==
	I0915 18:10:18.517932       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 18:10:18.517975       1 shared_informer.go:320] Caches are synced for resource quota
	W0915 18:10:18.631560       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:10:18.631604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 18:10:18.823827       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0915 18:10:18.823867       1 shared_informer.go:320] Caches are synced for garbage collector
	W0915 18:10:20.318884       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:10:20.318934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:10:21.135442       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:10:21.135482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 18:10:21.525323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="58.728µs"
	I0915 18:10:21.541288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.796058ms"
	I0915 18:10:21.541413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="81.122µs"
	W0915 18:10:23.747598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:10:23.747648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:10:25.808777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:10:25.808822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 18:10:30.446271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.556326ms"
	I0915 18:10:30.452816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.493954ms"
	I0915 18:10:30.452908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.182µs"
	I0915 18:10:30.455547       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.841µs"
	I0915 18:10:30.950436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.724µs"
	I0915 18:10:32.460138       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0915 18:10:32.462107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.015µs"
	I0915 18:10:32.464544       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [305fd1c94059] <==
	I0915 17:57:51.731198       1 server_linux.go:66] "Using iptables proxy"
	I0915 17:57:52.237960       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 17:57:52.238037       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 17:57:52.727695       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 17:57:52.727764       1 server_linux.go:169] "Using iptables Proxier"
	I0915 17:57:52.732410       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 17:57:52.732966       1 server.go:483] "Version info" version="v1.31.1"
	I0915 17:57:52.732984       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 17:57:52.734509       1 config.go:199] "Starting service config controller"
	I0915 17:57:52.734544       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 17:57:52.734587       1 config.go:105] "Starting endpoint slice config controller"
	I0915 17:57:52.734594       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 17:57:52.734610       1 config.go:328] "Starting node config controller"
	I0915 17:57:52.734624       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 17:57:52.835003       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 17:57:52.835050       1 shared_informer.go:320] Caches are synced for service config
	I0915 17:57:52.835293       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [511083959df7] <==
	E0915 17:57:41.243906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0915 17:57:41.243906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:41.244133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0915 17:57:41.244168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0915 17:57:41.244175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 17:57:41.244192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 17:57:41.244157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0915 17:57:41.244199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:41.244215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 17:57:41.244260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:41.244281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0915 17:57:41.244338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 17:57:41.244354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 17:57:41.244358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:41.244307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 17:57:41.244416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:41.244311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 17:57:41.244446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:42.085652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 17:57:42.085693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:42.227113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 17:57:42.227153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 17:57:42.384190       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 17:57:42.384231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0915 17:57:42.842161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 18:10:30 addons-924081 kubelet[2451]: I0915 18:10:30.636254    2451 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86fc63e-1e75-477a-89d2-6c1f90366215-kube-api-access-fwfdd" (OuterVolumeSpecName: "kube-api-access-fwfdd") pod "e86fc63e-1e75-477a-89d2-6c1f90366215" (UID: "e86fc63e-1e75-477a-89d2-6c1f90366215"). InnerVolumeSpecName "kube-api-access-fwfdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:10:30 addons-924081 kubelet[2451]: I0915 18:10:30.731571    2451 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fwfdd\" (UniqueName: \"kubernetes.io/projected/e86fc63e-1e75-477a-89d2-6c1f90366215-kube-api-access-fwfdd\") on node \"addons-924081\" DevicePath \"\""
	Sep 15 18:10:30 addons-924081 kubelet[2451]: I0915 18:10:30.731615    2451 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e86fc63e-1e75-477a-89d2-6c1f90366215-gcp-creds\") on node \"addons-924081\" DevicePath \"\""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.437988    2451 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhkj2\" (UniqueName: \"kubernetes.io/projected/7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131-kube-api-access-bhkj2\") pod \"7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131\" (UID: \"7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131\") "
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.439976    2451 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131-kube-api-access-bhkj2" (OuterVolumeSpecName: "kube-api-access-bhkj2") pod "7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131" (UID: "7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131"). InnerVolumeSpecName "kube-api-access-bhkj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.537707    2451 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86fc63e-1e75-477a-89d2-6c1f90366215" path="/var/lib/kubelet/pods/e86fc63e-1e75-477a-89d2-6c1f90366215/volumes"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.538236    2451 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnh8h\" (UniqueName: \"kubernetes.io/projected/727ad348-b4a0-40a9-a423-cac288b38182-kube-api-access-hnh8h\") pod \"727ad348-b4a0-40a9-a423-cac288b38182\" (UID: \"727ad348-b4a0-40a9-a423-cac288b38182\") "
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.538315    2451 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bhkj2\" (UniqueName: \"kubernetes.io/projected/7ddd4a6c-0bb9-4cdd-b2c2-6a358cc36131-kube-api-access-bhkj2\") on node \"addons-924081\" DevicePath \"\""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.541852    2451 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727ad348-b4a0-40a9-a423-cac288b38182-kube-api-access-hnh8h" (OuterVolumeSpecName: "kube-api-access-hnh8h") pod "727ad348-b4a0-40a9-a423-cac288b38182" (UID: "727ad348-b4a0-40a9-a423-cac288b38182"). InnerVolumeSpecName "kube-api-access-hnh8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.639792    2451 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmpr\" (UniqueName: \"kubernetes.io/projected/661ba882-e028-4cdd-bb37-8ee95de61c69-kube-api-access-jgmpr\") pod \"661ba882-e028-4cdd-bb37-8ee95de61c69\" (UID: \"661ba882-e028-4cdd-bb37-8ee95de61c69\") "
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.639914    2451 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hnh8h\" (UniqueName: \"kubernetes.io/projected/727ad348-b4a0-40a9-a423-cac288b38182-kube-api-access-hnh8h\") on node \"addons-924081\" DevicePath \"\""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.644645    2451 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661ba882-e028-4cdd-bb37-8ee95de61c69-kube-api-access-jgmpr" (OuterVolumeSpecName: "kube-api-access-jgmpr") pod "661ba882-e028-4cdd-bb37-8ee95de61c69" (UID: "661ba882-e028-4cdd-bb37-8ee95de61c69"). InnerVolumeSpecName "kube-api-access-jgmpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.728798    2451 scope.go:117] "RemoveContainer" containerID="86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.740648    2451 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jgmpr\" (UniqueName: \"kubernetes.io/projected/661ba882-e028-4cdd-bb37-8ee95de61c69-kube-api-access-jgmpr\") on node \"addons-924081\" DevicePath \"\""
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.757398    2451 scope.go:117] "RemoveContainer" containerID="86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: E0915 18:10:31.822058    2451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0" containerID="86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.822106    2451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0"} err="failed to get container status \"86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 86acdab2983506e5442611ad9ce6fdc9c6ee456f465bb913b0fed4e6390161f0"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.822136    2451 scope.go:117] "RemoveContainer" containerID="6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.852224    2451 scope.go:117] "RemoveContainer" containerID="6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: E0915 18:10:31.853282    2451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac" containerID="6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.853328    2451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac"} err="failed to get container status \"6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6d1e4de06db8f2f6fabcfdf8fb7586505ba8cc6f134c275294b36f4e8ad49fac"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.853358    2451 scope.go:117] "RemoveContainer" containerID="99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.936485    2451 scope.go:117] "RemoveContainer" containerID="99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: E0915 18:10:31.937968    2451 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c" containerID="99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c"
	Sep 15 18:10:31 addons-924081 kubelet[2451]: I0915 18:10:31.938022    2451 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c"} err="failed to get container status \"99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 99c979201ef97c6c87baf1c857f890b602203a87ec0726983cbb938e98f2ce6c"
	
	
	==> storage-provisioner [182d5cd1be62] <==
	I0915 17:57:56.326860       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 17:57:56.441814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 17:57:56.441871       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 17:57:56.532148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 17:57:56.532345       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-924081_bb71074d-2180-4a47-a26d-d7386c9afdc8!
	I0915 17:57:56.532415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0aa87fe3-e58d-4a23-bc68-a3aee9e3d268", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-924081_bb71074d-2180-4a47-a26d-d7386c9afdc8 became leader
	I0915 17:57:56.729148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-924081_bb71074d-2180-4a47-a26d-d7386c9afdc8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-924081 -n addons-924081
helpers_test.go:261: (dbg) Run:  kubectl --context addons-924081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-xz47d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-924081 describe pod busybox hello-world-app-55bf9c44b4-xz47d
helpers_test.go:282: (dbg) kubectl --context addons-924081 describe pod busybox hello-world-app-55bf9c44b4-xz47d:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-924081/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 18:01:16 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6f9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x6f9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-924081
	  Normal   Pulling    7m46s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             hello-world-app-55bf9c44b4-xz47d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-924081/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 18:10:30 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t77dd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t77dd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-xz47d to addons-924081
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.898s (1.898s including waiting). Image size: 4939776 bytes.
	  Normal  Created    1s    kubelet            Created container hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (74.17s)
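Note: the post-mortem above records the default/busybox pod stuck in ImagePullBackOff because the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with "unauthorized: authentication failed". A minimal triage sketch, assuming the addons-924081 profile is still running — these commands are illustrative and were not part of the test run:

	# Confirm the exact pull error the kubelet reported (pod name taken from the log above).
	kubectl --context addons-924081 describe pod busybox | grep -B1 -A1 'Failed to pull'

	# Pull the same image directly on the node's Docker daemon to separate
	# a registry/auth problem from a kubelet credential problem.
	minikube -p addons-924081 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If the direct pull also fails with an authentication error, the problem sits between the node and gcr.io rather than in anything specific to the Registry addon.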

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 20.05
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 13.57
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.4
21 TestBinaryMirror 1.6
22 TestOffline 76.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 213.75
29 TestAddons/serial/Volcano 38.61
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 28.98
35 TestAddons/parallel/InspektorGadget 10.61
36 TestAddons/parallel/MetricsServer 5.76
37 TestAddons/parallel/HelmTiller 11.54
39 TestAddons/parallel/CSI 41.61
40 TestAddons/parallel/Headlamp 25.98
41 TestAddons/parallel/CloudSpanner 5.45
42 TestAddons/parallel/LocalPath 53.04
43 TestAddons/parallel/NvidiaDevicePlugin 6.4
44 TestAddons/parallel/Yakd 11.62
45 TestAddons/StoppedEnableDisable 11.12
46 TestCertOptions 25.6
47 TestCertExpiration 249.13
48 TestDockerFlags 26.91
49 TestForceSystemdFlag 31.73
50 TestForceSystemdEnv 29.4
52 TestKVMDriverInstallOrUpdate 4.69
56 TestErrorSpam/setup 21.06
57 TestErrorSpam/start 0.59
58 TestErrorSpam/status 0.86
59 TestErrorSpam/pause 1.15
60 TestErrorSpam/unpause 1.37
61 TestErrorSpam/stop 10.82
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 68.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.84
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.41
73 TestFunctional/serial/CacheCmd/cache/add_local 1.43
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.91
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.99
84 TestFunctional/serial/LogsFileCmd 1.01
85 TestFunctional/serial/InvalidService 4
87 TestFunctional/parallel/ConfigCmd 0.4
88 TestFunctional/parallel/DashboardCmd 18.84
89 TestFunctional/parallel/DryRun 0.37
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 10.54
96 TestFunctional/parallel/AddonsCmd 0.2
97 TestFunctional/parallel/PersistentVolumeClaim 37.58
99 TestFunctional/parallel/SSHCmd 0.6
100 TestFunctional/parallel/CpCmd 1.96
101 TestFunctional/parallel/MySQL 25.89
102 TestFunctional/parallel/FileSync 0.26
103 TestFunctional/parallel/CertSync 1.59
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
111 TestFunctional/parallel/License 0.65
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.24
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.26
118 TestFunctional/parallel/ServiceCmd/List 0.55
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
121 TestFunctional/parallel/ServiceCmd/Format 0.45
122 TestFunctional/parallel/ServiceCmd/URL 0.39
123 TestFunctional/parallel/Version/short 0.12
124 TestFunctional/parallel/Version/components 0.61
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.86
131 TestFunctional/parallel/ImageCommands/Setup 1.9
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/DockerEnv/bash 0.94
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
142 TestFunctional/parallel/ProfileCmd/profile_list 0.4
143 TestFunctional/parallel/MountCmd/any-port 19.89
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.99
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.8
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
152 TestFunctional/parallel/MountCmd/specific-port 2.03
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 98.49
161 TestMultiControlPlane/serial/DeployApp 6.67
162 TestMultiControlPlane/serial/PingHostFromPods 1.08
163 TestMultiControlPlane/serial/AddWorkerNode 20.45
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
166 TestMultiControlPlane/serial/CopyFile 15.98
167 TestMultiControlPlane/serial/StopSecondaryNode 11.46
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
169 TestMultiControlPlane/serial/RestartSecondaryNode 19.23
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.31
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 277.25
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.67
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.47
174 TestMultiControlPlane/serial/StopCluster 32.76
175 TestMultiControlPlane/serial/RestartCluster 85.52
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.46
177 TestMultiControlPlane/serial/AddSecondaryNode 37.33
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.64
181 TestImageBuild/serial/Setup 24.45
182 TestImageBuild/serial/NormalBuild 2.51
183 TestImageBuild/serial/BuildWithBuildArg 1.06
184 TestImageBuild/serial/BuildWithDockerIgnore 0.84
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
189 TestJSONOutput/start/Command 64.15
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.54
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.44
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.73
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
214 TestKicCustomNetwork/create_custom_network 26.94
215 TestKicCustomNetwork/use_default_bridge_network 25.82
216 TestKicExistingNetwork 25.69
217 TestKicCustomSubnet 23.54
218 TestKicStaticIP 26.55
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 50.24
223 TestMountStart/serial/StartWithMountFirst 10.31
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 10.46
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.45
228 TestMountStart/serial/VerifyMountPostDelete 0.24
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 8.36
231 TestMountStart/serial/VerifyMountPostStop 0.24
234 TestMultiNode/serial/FreshStart2Nodes 71.58
235 TestMultiNode/serial/DeployApp2Nodes 37.07
236 TestMultiNode/serial/PingHostFrom2Pods 0.7
237 TestMultiNode/serial/AddNode 14.46
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 8.88
241 TestMultiNode/serial/StopNode 2.08
242 TestMultiNode/serial/StartAfterStop 9.92
243 TestMultiNode/serial/RestartKeepsNodes 94.54
244 TestMultiNode/serial/DeleteNode 5.17
245 TestMultiNode/serial/StopMultiNode 21.43
246 TestMultiNode/serial/RestartMultiNode 52.13
247 TestMultiNode/serial/ValidateNameConflict 27.57
252 TestPreload 106.24
254 TestScheduledStopUnix 95.8
255 TestSkaffold 102.74
257 TestInsufficientStorage 9.68
258 TestRunningBinaryUpgrade 62.64
260 TestKubernetesUpgrade 337.84
261 TestMissingContainerUpgrade 154.3
263 TestStoppedBinaryUpgrade/Setup 2.59
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
265 TestNoKubernetes/serial/StartWithK8s 29.53
266 TestStoppedBinaryUpgrade/Upgrade 148.33
267 TestNoKubernetes/serial/StartWithStopK8s 16.98
268 TestNoKubernetes/serial/Start 11.6
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
270 TestNoKubernetes/serial/ProfileList 1.03
271 TestNoKubernetes/serial/Stop 1.46
272 TestNoKubernetes/serial/StartNoArgs 8.36
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
294 TestPause/serial/Start 38.81
295 TestNetworkPlugins/group/auto/Start 32.88
296 TestPause/serial/SecondStartNoReconfiguration 29.95
297 TestNetworkPlugins/group/auto/KubeletFlags 0.25
298 TestNetworkPlugins/group/auto/NetCatPod 8.21
299 TestPause/serial/Pause 0.55
300 TestPause/serial/VerifyStatus 0.32
301 TestPause/serial/Unpause 0.45
302 TestPause/serial/PauseAgain 0.62
303 TestPause/serial/DeletePaused 2.04
304 TestPause/serial/VerifyDeletedResources 0.69
305 TestNetworkPlugins/group/kindnet/Start 58.12
306 TestNetworkPlugins/group/auto/DNS 20.87
307 TestNetworkPlugins/group/auto/Localhost 0.12
308 TestNetworkPlugins/group/auto/HairPin 0.11
309 TestNetworkPlugins/group/calico/Start 77.91
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
312 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
313 TestNetworkPlugins/group/kindnet/DNS 0.15
314 TestNetworkPlugins/group/kindnet/Localhost 0.14
315 TestNetworkPlugins/group/kindnet/HairPin 0.11
316 TestNetworkPlugins/group/custom-flannel/Start 51.44
317 TestNetworkPlugins/group/false/Start 67.41
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/enable-default-cni/Start 67.08
320 TestNetworkPlugins/group/calico/KubeletFlags 0.42
321 TestNetworkPlugins/group/calico/NetCatPod 12.35
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
324 TestNetworkPlugins/group/calico/DNS 0.13
325 TestNetworkPlugins/group/calico/Localhost 0.11
326 TestNetworkPlugins/group/calico/HairPin 0.12
327 TestNetworkPlugins/group/custom-flannel/DNS 0.13
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
330 TestNetworkPlugins/group/flannel/Start 47.68
331 TestNetworkPlugins/group/false/KubeletFlags 0.28
332 TestNetworkPlugins/group/false/NetCatPod 11.21
333 TestNetworkPlugins/group/bridge/Start 64.37
334 TestNetworkPlugins/group/false/DNS 0.15
335 TestNetworkPlugins/group/false/Localhost 0.14
336 TestNetworkPlugins/group/false/HairPin 0.13
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
339 TestNetworkPlugins/group/kubenet/Start 70.84
340 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
341 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
342 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
343 TestNetworkPlugins/group/flannel/ControllerPod 6.01
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
345 TestNetworkPlugins/group/flannel/NetCatPod 9.21
347 TestStartStop/group/old-k8s-version/serial/FirstStart 101.17
348 TestNetworkPlugins/group/flannel/DNS 0.16
349 TestNetworkPlugins/group/flannel/Localhost 0.13
350 TestNetworkPlugins/group/flannel/HairPin 0.16
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
352 TestNetworkPlugins/group/bridge/NetCatPod 9.21
353 TestNetworkPlugins/group/bridge/DNS 0.16
354 TestNetworkPlugins/group/bridge/Localhost 0.12
355 TestNetworkPlugins/group/bridge/HairPin 0.12
357 TestStartStop/group/no-preload/serial/FirstStart 67.48
359 TestStartStop/group/embed-certs/serial/FirstStart 67.02
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
361 TestNetworkPlugins/group/kubenet/NetCatPod 9.28
362 TestNetworkPlugins/group/kubenet/DNS 0.13
363 TestNetworkPlugins/group/kubenet/Localhost 0.11
364 TestNetworkPlugins/group/kubenet/HairPin 0.11
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.11
367 TestStartStop/group/no-preload/serial/DeployApp 9.26
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
369 TestStartStop/group/no-preload/serial/Stop 10.65
370 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
371 TestStartStop/group/embed-certs/serial/DeployApp 8.24
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/no-preload/serial/SecondStart 263.37
375 TestStartStop/group/old-k8s-version/serial/Stop 10.86
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.86
378 TestStartStop/group/embed-certs/serial/Stop 10.9
379 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/old-k8s-version/serial/SecondStart 141.43
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
382 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.18
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
384 TestStartStop/group/embed-certs/serial/SecondStart 267.87
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
386 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.66
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
390 TestStartStop/group/old-k8s-version/serial/Pause 2.48
392 TestStartStop/group/newest-cni/serial/FirstStart 30.09
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
395 TestStartStop/group/newest-cni/serial/Stop 10.79
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
397 TestStartStop/group/newest-cni/serial/SecondStart 14.61
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
401 TestStartStop/group/newest-cni/serial/Pause 2.75
402 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
405 TestStartStop/group/no-preload/serial/Pause 2.51
406 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
410 TestStartStop/group/embed-certs/serial/Pause 2.37
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.42
TestDownloadOnly/v1.20.0/json-events (20.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-600764 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-600764 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (20.051458856s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.05s)
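The `-o=json` flag in the command above switches `minikube start` to emitting its progress as a stream of JSON event objects on stdout, which is what this json-events test consumes. Below is a minimal Go sketch of draining such a stream, assuming only that each event is a self-contained JSON object; the `type`/`data` field names are illustrative, not minikube's documented schema.

```go
// Sketch only: drain a stream of JSON event objects such as the one
// "minikube start -o=json" writes to stdout. Field names are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string         `json:"type"`
	Data map[string]any `json:"data"`
}

func main() {
	// e.g.: minikube start -o=json ... | ./thisprog
	dec := json.NewDecoder(os.Stdin)
	for {
		var ev event
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF (or trailing garbage) ends the stream
		}
		fmt.Printf("%s: %v\n", ev.Type, ev.Data)
	}
}
```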

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-600764
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-600764: exit status 85 (57.337466ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-600764 | jenkins | v1.34.0 | 15 Sep 24 17:56 UTC |          |
	|         | -p download-only-600764        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 17:56:26
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 17:56:26.616225   17961 out.go:345] Setting OutFile to fd 1 ...
	I0915 17:56:26.616348   17961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:56:26.616358   17961 out.go:358] Setting ErrFile to fd 2...
	I0915 17:56:26.616365   17961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:56:26.616537   17961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	W0915 17:56:26.616655   17961 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19648-11129/.minikube/config/config.json: open /home/jenkins/minikube-integration/19648-11129/.minikube/config/config.json: no such file or directory
	I0915 17:56:26.617203   17961 out.go:352] Setting JSON to true
	I0915 17:56:26.618061   17961 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2333,"bootTime":1726420654,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 17:56:26.618163   17961 start.go:139] virtualization: kvm guest
	I0915 17:56:26.620463   17961 out.go:97] [download-only-600764] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 17:56:26.620572   17961 notify.go:220] Checking for updates...
	W0915 17:56:26.620587   17961 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 17:56:26.621935   17961 out.go:169] MINIKUBE_LOCATION=19648
	I0915 17:56:26.623285   17961 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 17:56:26.624698   17961 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 17:56:26.626028   17961 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	I0915 17:56:26.627320   17961 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 17:56:26.629635   17961 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 17:56:26.629838   17961 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 17:56:26.650847   17961 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 17:56:26.650908   17961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:56:27.017886   17961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 17:56:27.009219938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:56:27.017980   17961 docker.go:318] overlay module found
	I0915 17:56:27.019672   17961 out.go:97] Using the docker driver based on user configuration
	I0915 17:56:27.019702   17961 start.go:297] selected driver: docker
	I0915 17:56:27.019708   17961 start.go:901] validating driver "docker" against <nil>
	I0915 17:56:27.019789   17961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:56:27.066347   17961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 17:56:27.058213027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:56:27.066508   17961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 17:56:27.067091   17961 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0915 17:56:27.067314   17961 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 17:56:27.069253   17961 out.go:169] Using Docker driver with root privileges
	I0915 17:56:27.070364   17961 cni.go:84] Creating CNI manager for ""
	I0915 17:56:27.070437   17961 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 17:56:27.070526   17961 start.go:340] cluster config:
	{Name:download-only-600764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-600764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 17:56:27.071975   17961 out.go:97] Starting "download-only-600764" primary control-plane node in "download-only-600764" cluster
	I0915 17:56:27.071992   17961 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 17:56:27.073146   17961 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 17:56:27.073166   17961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 17:56:27.073204   17961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 17:56:27.089333   17961 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 17:56:27.089519   17961 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 17:56:27.089624   17961 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 17:56:27.242013   17961 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0915 17:56:27.242041   17961 cache.go:56] Caching tarball of preloaded images
	I0915 17:56:27.242239   17961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 17:56:27.244450   17961 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 17:56:27.244474   17961 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 17:56:27.352181   17961 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0915 17:56:37.688591   17961 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 17:56:37.688685   17961 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 17:56:38.475026   17961 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 17:56:38.475350   17961 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/download-only-600764/config.json ...
	I0915 17:56:38.475377   17961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/download-only-600764/config.json: {Name:mk4cddb659654b1ac1a8b30388854b199e423123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 17:56:38.475545   17961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 17:56:38.475702   17961 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19648-11129/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-600764 host does not exist
	  To start a cluster, run: "minikube start -p download-only-600764"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
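The log above shows the preload tarball being fetched with a `?checksum=md5:...` query and then verified (`preload.go:254`). A minimal stdlib-only sketch of that verify-after-download step; the file name and digest below are copied from the log but stand in for whatever artifact is actually on disk.

```go
// Sketch of the verify-after-download step seen in the log above: hash the
// preload tarball and compare against the md5 from the checksum query.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Placeholder values mirroring the log, not real artifacts on this machine.
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3")
	fmt.Println(err)
}
```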

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-600764
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (13.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-051440 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-051440 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.565376374s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.57s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-051440
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-051440: exit status 85 (58.778897ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-600764 | jenkins | v1.34.0 | 15 Sep 24 17:56 UTC |                     |
	|         | -p download-only-600764        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 17:56 UTC | 15 Sep 24 17:56 UTC |
	| delete  | -p download-only-600764        | download-only-600764 | jenkins | v1.34.0 | 15 Sep 24 17:56 UTC | 15 Sep 24 17:56 UTC |
	| start   | -o=json --download-only        | download-only-051440 | jenkins | v1.34.0 | 15 Sep 24 17:56 UTC |                     |
	|         | -p download-only-051440        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 17:56:47
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 17:56:47.044765   18349 out.go:345] Setting OutFile to fd 1 ...
	I0915 17:56:47.044858   18349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:56:47.044866   18349 out.go:358] Setting ErrFile to fd 2...
	I0915 17:56:47.044869   18349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 17:56:47.045025   18349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 17:56:47.045566   18349 out.go:352] Setting JSON to true
	I0915 17:56:47.046413   18349 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2353,"bootTime":1726420654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 17:56:47.046502   18349 start.go:139] virtualization: kvm guest
	I0915 17:56:47.048569   18349 out.go:97] [download-only-051440] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 17:56:47.048711   18349 notify.go:220] Checking for updates...
	I0915 17:56:47.049886   18349 out.go:169] MINIKUBE_LOCATION=19648
	I0915 17:56:47.051112   18349 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 17:56:47.052296   18349 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 17:56:47.053790   18349 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	I0915 17:56:47.054949   18349 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 17:56:47.057135   18349 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 17:56:47.057390   18349 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 17:56:47.078426   18349 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 17:56:47.078530   18349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:56:47.124386   18349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 17:56:47.115383428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:56:47.124490   18349 docker.go:318] overlay module found
	I0915 17:56:47.126025   18349 out.go:97] Using the docker driver based on user configuration
	I0915 17:56:47.126042   18349 start.go:297] selected driver: docker
	I0915 17:56:47.126047   18349 start.go:901] validating driver "docker" against <nil>
	I0915 17:56:47.126118   18349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 17:56:47.172896   18349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 17:56:47.164706095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 17:56:47.173060   18349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 17:56:47.173523   18349 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0915 17:56:47.173657   18349 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 17:56:47.175352   18349 out.go:169] Using Docker driver with root privileges
	I0915 17:56:47.176476   18349 cni.go:84] Creating CNI manager for ""
	I0915 17:56:47.176540   18349 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 17:56:47.176558   18349 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 17:56:47.176638   18349 start.go:340] cluster config:
	{Name:download-only-051440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-051440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 17:56:47.177862   18349 out.go:97] Starting "download-only-051440" primary control-plane node in "download-only-051440" cluster
	I0915 17:56:47.177882   18349 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 17:56:47.178943   18349 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 17:56:47.178964   18349 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 17:56:47.179068   18349 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 17:56:47.194017   18349 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 17:56:47.194133   18349 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 17:56:47.194147   18349 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 17:56:47.194151   18349 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 17:56:47.194158   18349 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 17:56:47.666357   18349 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 17:56:47.666404   18349 cache.go:56] Caching tarball of preloaded images
	I0915 17:56:47.666552   18349 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 17:56:47.668558   18349 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0915 17:56:47.668577   18349 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 17:56:47.769235   18349 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19648-11129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-051440 host does not exist
	  To start a cluster, run: "minikube start -p download-only-051440"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
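Unlike the v1.20.0 run, this run finds the kicbase image already cached (`image.go:66`: "Found ... in local cache directory, skipping pull"). A minimal sketch of that stat-before-download pattern; `fetch` is a hypothetical stand-in for the real pull step.

```go
// Sketch of the cache-hit logic in the log above: stat the cached artifact
// first and only download on a miss. fetch() is a placeholder.
package main

import (
	"fmt"
	"os"
)

func ensureCached(path string, fetch func(string) error) error {
	if _, err := os.Stat(path); err == nil {
		fmt.Println("exists in cache, skipping pull:", path)
		return nil
	}
	return fetch(path)
}

func main() {
	_ = ensureCached("/tmp/kicbase.tar", func(p string) error {
		fmt.Println("cache miss, downloading:", p)
		return nil // real code would stream the image here
	})
}
```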

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-051440
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-579642 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-579642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-579642
--- PASS: TestDownloadOnlyKic (1.40s)

                                                
                                    
TestBinaryMirror (1.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-308651 --alsologtostderr --binary-mirror http://127.0.0.1:33109 --driver=docker  --container-runtime=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-308651 --alsologtostderr --binary-mirror http://127.0.0.1:33109 --driver=docker  --container-runtime=docker: (1.264886172s)
helpers_test.go:175: Cleaning up "binary-mirror-308651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-308651
--- PASS: TestBinaryMirror (1.60s)
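The test drives `minikube start --binary-mirror http://127.0.0.1:33109`, i.e. it points binary downloads at a local HTTP endpoint. A minimal sketch of such a mirror as a plain static file server; the `./mirror` directory layout is an assumption, since the log does not show which paths minikube requests.

```go
// Sketch of a local mirror like the one --binary-mirror points at:
// a plain static file server on the loopback address from the log.
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./mirror")) // assumed layout
	log.Println("serving ./mirror on http://127.0.0.1:33109")
	log.Fatal(http.ListenAndServe("127.0.0.1:33109", fs))
}
```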

                                                
                                    
TestOffline (76.55s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-814693 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-814693 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m12.976421732s)
helpers_test.go:175: Cleaning up "offline-docker-814693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-814693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-814693: (3.572283587s)
--- PASS: TestOffline (76.55s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-924081
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-924081: exit status 85 (51.345342ms)

                                                
                                                
-- stdout --
	* Profile "addons-924081" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-924081"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
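The assertion here is on a specific exit status (85) from a CLI run. A minimal sketch of how a Go harness can recover that code via `*exec.ExitError`; the command line is taken from the log and assumes a locally built binary at that relative path.

```go
// Sketch: run the CLI and read its exit code, as the "exit status 85"
// assertion above requires.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", "addons-924081")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // expected: 85 for a missing profile
	} else if err != nil {
		fmt.Println("failed to start:", err)
	}
}
```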

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-924081
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-924081: exit status 85 (48.919367ms)

                                                
                                                
-- stdout --
	* Profile "addons-924081" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-924081"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (213.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-924081 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-924081 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m33.754136106s)
--- PASS: TestAddons/Setup (213.75s)

                                                
                                    
TestAddons/serial/Volcano (38.61s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 10.951042ms
addons_test.go:905: volcano-admission stabilized in 11.002865ms
addons_test.go:897: volcano-scheduler stabilized in 11.042773ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dlfrl" [7733490c-d8be-45dd-a411-027128609ed1] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003236793s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-jpzxr" [b1091224-d1e6-4308-af7c-047a3e5b464a] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003828887s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-gzvsw" [0a16058f-2f61-40bb-a23f-6555cd2b1899] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003649146s
addons_test.go:932: (dbg) Run:  kubectl --context addons-924081 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-924081 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-924081 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4a8c8088-0536-4ece-8d7c-4a49362d2e96] Pending
helpers_test.go:344: "test-job-nginx-0" [4a8c8088-0536-4ece-8d7c-4a49362d2e96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4a8c8088-0536-4ece-8d7c-4a49362d2e96] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003248952s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable volcano --alsologtostderr -v=1: (10.242156952s)
--- PASS: TestAddons/serial/Volcano (38.61s)
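Each of the "waiting 6m0s for pods matching ..." lines above is a poll-until-deadline loop over a pod condition. A minimal stdlib sketch of that wait pattern; the condition function is a placeholder for a real label-selector check against the cluster.

```go
// Sketch of the wait pattern used throughout these tests: poll a condition
// until it holds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(timeout, interval time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	start := time.Now()
	err := waitFor(6*time.Minute, 2*time.Second, func() bool {
		return time.Since(start) > 4*time.Second // stand-in for "pod is Running"
	})
	fmt.Println(err, "after", time.Since(start).Round(time.Second))
}
```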

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-924081 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-924081 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/Ingress (28.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-924081 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-924081 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-924081 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [29644fe9-d59a-46c2-be16-fe000081731d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [29644fe9-d59a-46c2-be16-fe000081731d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.003445723s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-924081 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable ingress-dns --alsologtostderr -v=1: (1.010797265s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable ingress --alsologtostderr -v=1: (7.662828671s)
--- PASS: TestAddons/parallel/Ingress (28.98s)
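The probe above (`curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`) hits the node by IP while presenting the ingress hostname. The same request in Go, where `req.Host`, not a header entry, controls the Host header sent on the wire.

```go
// Sketch of the Host-header probe the Ingress test runs via curl.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes to the nginx Ingress rule

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```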

                                                
                                    
TestAddons/parallel/InspektorGadget (10.61s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6h5rp" [ff12a97f-82c1-45b3-867c-a7cbd6481dbb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004226592s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-924081
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-924081: (5.605535594s)
--- PASS: TestAddons/parallel/InspektorGadget (10.61s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.302119ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-g29nd" [d0a4650f-3b55-4081-b127-353cca2c9570] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003713402s
addons_test.go:417: (dbg) Run:  kubectl --context addons-924081 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.54s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.081565ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-8kwdn" [44490bdf-edf8-403c-a16b-77e4a27b2aca] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004545449s
addons_test.go:475: (dbg) Run:  kubectl --context addons-924081 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-924081 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.9642953s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.54s)

                                                
                                    
TestAddons/parallel/CSI (41.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.915253ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-924081 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-924081 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [82b5ef27-9f88-4efa-b6fe-b7d67e60bccc] Pending
helpers_test.go:344: "task-pv-pod" [82b5ef27-9f88-4efa-b6fe-b7d67e60bccc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [82b5ef27-9f88-4efa-b6fe-b7d67e60bccc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003639218s
addons_test.go:590: (dbg) Run:  kubectl --context addons-924081 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-924081 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-924081 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-924081 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-924081 delete pod task-pv-pod: (1.033152076s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-924081 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-924081 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-924081 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f35c0196-2f9c-42e6-86a0-bb46ad29eb12] Pending
helpers_test.go:344: "task-pv-pod-restore" [f35c0196-2f9c-42e6-86a0-bb46ad29eb12] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f35c0196-2f9c-42e6-86a0-bb46ad29eb12] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003688664s
addons_test.go:632: (dbg) Run:  kubectl --context addons-924081 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-924081 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-924081 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.692026914s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.61s)
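The round-trip above creates a PVC, snapshots it, and restores the snapshot into a new PVC. A hedged reconstruction of the two snapshot-related manifests, assuming the class names csi-hostpath-snapclass and csi-hostpath-sc (they are not copied from testdata/ and may differ):

kubectl --context addons-924081 apply -f - <<'EOF'
# snapshot of the bound claim "hpvc"
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-924081 apply -f - <<'EOF'
# restore: a fresh PVC whose dataSource is the snapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF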

                                                
                                    
TestAddons/parallel/Headlamp (25.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-924081 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-pcmm5" [ada31f90-a2d2-4b22-b45e-c3c32cf60992] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pcmm5" [ada31f90-a2d2-4b22-b45e-c3c32cf60992] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 25.003260122s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (25.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-b8d7f" [f992b91e-0126-4c49-980d-bf336b495641] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004001825s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-924081
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
TestAddons/parallel/LocalPath (53.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-924081 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-924081 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3b2d03fd-435f-495b-9c8b-b1154ce2a0dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3b2d03fd-435f-495b-9c8b-b1154ce2a0dc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3b2d03fd-435f-495b-9c8b-b1154ce2a0dc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004115641s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-924081 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 ssh "cat /opt/local-path-provisioner/pvc-c592a443-dc12-4138-ba5c-46e5f18ad12e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-924081 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-924081 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.134009672s)
--- PASS: TestAddons/parallel/LocalPath (53.04s)
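A minimal sketch of the claim this test binds through the rancher local-path provisioner; storageClassName: local-path is the provisioner's usual default and is an assumption here, not copied from testdata/:

kubectl --context addons-924081 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
EOF
# after a pod writes into the volume, the data is visible on the node under
# /opt/local-path-provisioner, which is what the "ssh cat" step verifies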

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.4s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nhvqc" [47a2d060-ff2d-4161-9188-f26d8cb11aa1] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003592665s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-924081
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.40s)

                                                
                                    
TestAddons/parallel/Yakd (11.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-w5lsn" [a306000f-ea32-4cd0-ab62-901e8dae8e4f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004144716s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-924081 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-924081 addons disable yakd --alsologtostderr -v=1: (5.615936101s)
--- PASS: TestAddons/parallel/Yakd (11.62s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-924081
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-924081: (10.88166706s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-924081
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-924081
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-924081
--- PASS: TestAddons/StoppedEnableDisable (11.12s)
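A minimal sketch of what this test asserts: addon toggles must succeed against a stopped cluster and take effect on the next start (commands taken from the log):

minikube stop -p addons-924081
minikube addons enable dashboard -p addons-924081
minikube addons disable dashboard -p addons-924081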

                                                
                                    
TestCertOptions (25.6s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-154247 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-154247 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (22.881560553s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-154247 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-154247 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-154247 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-154247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-154247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-154247: (2.122598918s)
--- PASS: TestCertOptions (25.60s)
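A minimal sketch of the certificate check above; the start and ssh commands are taken from the log, and the trailing grep is illustrative:

minikube start -p cert-options-154247 --memory=2048 \
  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=docker --container-runtime=docker
# the extra IPs and names should appear as SANs on the apiserver certificate
minikube -p cert-options-154247 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'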

                                                
                                    
TestCertExpiration (249.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-480001 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-480001 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.871419054s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-480001 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-480001 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (37.911033815s)
helpers_test.go:175: Cleaning up "cert-expiration-480001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-480001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-480001: (2.349424138s)
--- PASS: TestCertExpiration (249.13s)
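A minimal sketch of the rotation this test drives; most of the 249s wall time is spent letting the 3m certificates expire before the second start renews them:

minikube start -p cert-expiration-480001 --memory=2048 \
  --cert-expiration=3m --driver=docker --container-runtime=docker
sleep 180   # wait out the short expiration window
minikube start -p cert-expiration-480001 --memory=2048 \
  --cert-expiration=8760h --driver=docker --container-runtime=docker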

                                                
                                    
TestDockerFlags (26.91s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-622664 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-622664 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.353070762s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-622664 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-622664 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-622664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-622664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-622664: (2.01211031s)
--- PASS: TestDockerFlags (26.91s)
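A minimal sketch of the mapping this test verifies: each --docker-env should surface in the docker unit's Environment= property and each --docker-opt in its ExecStart line (the expected-output comments are assumptions inferred from the test's flags):

minikube start -p docker-flags-622664 --memory=2048 --docker-env=FOO=BAR \
  --docker-opt=debug --driver=docker --container-runtime=docker
minikube -p docker-flags-622664 ssh \
  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR
minikube -p docker-flags-622664 ssh \
  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug opt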

                                                
                                    
TestForceSystemdFlag (31.73s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-856512 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-856512 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.653503356s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-856512 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-856512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-856512
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-856512: (1.737195407s)
--- PASS: TestForceSystemdFlag (31.73s)
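A minimal sketch of the assertion: with --force-systemd the engine inside the node must report the systemd cgroup driver:

minikube start -p force-systemd-flag-856512 --memory=2048 --force-systemd \
  --driver=docker --container-runtime=docker
minikube -p force-systemd-flag-856512 ssh \
  "docker info --format {{.CgroupDriver}}"   # expect: systemd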

                                                
                                    
TestForceSystemdEnv (29.4s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-055075 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-055075 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.405383935s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-055075 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-055075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-055075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-055075: (1.703161669s)
--- PASS: TestForceSystemdEnv (29.40s)
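The same check driven by an environment variable instead of the flag; MINIKUBE_FORCE_SYSTEMD is the variable the harness sets (its name also appears in the profile dumps later in this report):

MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-055075 \
  --memory=2048 --driver=docker --container-runtime=docker
minikube -p force-systemd-env-055075 ssh \
  "docker info --format {{.CgroupDriver}}"   # expect: systemd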

                                                
                                    
TestKVMDriverInstallOrUpdate (4.69s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.69s)

                                                
                                    
TestErrorSpam/setup (21.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-419341 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-419341 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-419341 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-419341 --driver=docker  --container-runtime=docker: (21.058470885s)
--- PASS: TestErrorSpam/setup (21.06s)

                                                
                                    
TestErrorSpam/start (0.59s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

                                                
                                    
TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 pause
--- PASS: TestErrorSpam/pause (1.15s)

                                                
                                    
TestErrorSpam/unpause (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

                                                
                                    
TestErrorSpam/stop (10.82s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 stop: (10.64959258s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-419341 --log_dir /tmp/nospam-419341 stop
--- PASS: TestErrorSpam/stop (10.82s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19648-11129/.minikube/files/etc/test/nested/copy/17950/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (68.83s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-852853 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m8.824799352s)
--- PASS: TestFunctional/serial/StartWithProxy (68.83s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.84s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-852853 --alsologtostderr -v=8: (35.834393328s)
functional_test.go:663: soft start took 35.835207385s for "functional-852853" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.84s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-852853 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.41s)
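A minimal sketch of the image-cache flow the CacheCmd tests in this group cover; image names are from the log, and note that list and delete run without a profile flag there, so the same form is used here:

minikube -p functional-852853 cache add registry.k8s.io/pause:3.1
minikube cache list
minikube cache delete registry.k8s.io/pause:3.1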

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-852853 /tmp/TestFunctionalserialCacheCmdcacheadd_local3315022736/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache add minikube-local-cache-test:functional-852853
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-852853 cache add minikube-local-cache-test:functional-852853: (1.090979036s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache delete minikube-local-cache-test:functional-852853
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-852853
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.79643ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)
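A minimal sketch of the reload cycle above: remove the image inside the node, confirm it is gone, then restore it from the cache (commands taken from the log):

minikube -p functional-852853 ssh sudo docker rmi registry.k8s.io/pause:latest
minikube -p functional-852853 ssh \
  sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone
minikube -p functional-852853 cache reload
minikube -p functional-852853 ssh \
  sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again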

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 kubectl -- --context functional-852853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-852853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-852853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.911133326s)
functional_test.go:761: restart took 41.911272784s for "functional-852853" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-852853 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 logs --file /tmp/TestFunctionalserialLogsFileCmd400452821/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-852853 logs --file /tmp/TestFunctionalserialLogsFileCmd400452821/001/logs.txt: (1.01022145s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

                                                
                                    
TestFunctional/serial/InvalidService (4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-852853 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-852853
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-852853: exit status 115 (325.733769ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31528 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-852853 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 config get cpus: exit status 14 (77.007619ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 config get cpus: exit status 14 (57.623127ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
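A minimal sketch of the set/unset cycle above; config get exits with status 14 whenever the key is absent, which is what the test asserts before and after:

minikube -p functional-852853 config set cpus 2
minikube -p functional-852853 config get cpus     # prints 2
minikube -p functional-852853 config unset cpus
minikube -p functional-852853 config get cpus     # exit status 14, key not found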

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-852853 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-852853 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 73870: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.84s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-852853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (157.156648ms)

                                                
                                                
-- stdout --
	* [functional-852853] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 18:14:20.918779   70009 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:14:20.919034   70009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:14:20.919045   70009 out.go:358] Setting ErrFile to fd 2...
	I0915 18:14:20.919049   70009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:14:20.919313   70009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:14:20.920011   70009 out.go:352] Setting JSON to false
	I0915 18:14:20.921659   70009 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3407,"bootTime":1726420654,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 18:14:20.921792   70009 start.go:139] virtualization: kvm guest
	I0915 18:14:20.924594   70009 out.go:177] * [functional-852853] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 18:14:20.925925   70009 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 18:14:20.925988   70009 notify.go:220] Checking for updates...
	I0915 18:14:20.928767   70009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 18:14:20.930052   70009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 18:14:20.931409   70009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	I0915 18:14:20.932661   70009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 18:14:20.933821   70009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 18:14:20.935597   70009 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:14:20.936280   70009 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 18:14:20.961578   70009 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 18:14:20.961661   70009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 18:14:21.020744   70009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 18:14:21.009533768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 18:14:21.020877   70009 docker.go:318] overlay module found
	I0915 18:14:21.023740   70009 out.go:177] * Using the docker driver based on existing profile
	I0915 18:14:21.025221   70009 start.go:297] selected driver: docker
	I0915 18:14:21.025244   70009 start.go:901] validating driver "docker" against &{Name:functional-852853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-852853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 18:14:21.025343   70009 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 18:14:21.027924   70009 out.go:201] 
	W0915 18:14:21.029509   70009 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 18:14:21.031061   70009 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.37s)
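A minimal sketch of the validation above: --dry-run evaluates the requested configuration without touching the running cluster, and a 250MB request fails the 1800MB minimum with exit status 23:

minikube start -p functional-852853 --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=docker
echo $?   # 23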

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-852853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (212.621723ms)

                                                
                                                
-- stdout --
	* [functional-852853] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 18:14:21.297577   70237 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:14:21.297862   70237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:14:21.297875   70237 out.go:358] Setting ErrFile to fd 2...
	I0915 18:14:21.297881   70237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:14:21.298263   70237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:14:21.299287   70237 out.go:352] Setting JSON to false
	I0915 18:14:21.300586   70237 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3407,"bootTime":1726420654,"procs":471,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 18:14:21.300685   70237 start.go:139] virtualization: kvm guest
	I0915 18:14:21.302602   70237 out.go:177] * [functional-852853] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0915 18:14:21.304673   70237 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 18:14:21.304673   70237 notify.go:220] Checking for updates...
	I0915 18:14:21.306046   70237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 18:14:21.307647   70237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	I0915 18:14:21.309031   70237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	I0915 18:14:21.310624   70237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 18:14:21.312240   70237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 18:14:21.314023   70237 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:14:21.314567   70237 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 18:14:21.338837   70237 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 18:14:21.338931   70237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 18:14:21.388224   70237 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 18:14:21.37833871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 18:14:21.388339   70237 docker.go:318] overlay module found
	I0915 18:14:21.424118   70237 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 18:14:21.426202   70237 start.go:297] selected driver: docker
	I0915 18:14:21.426220   70237 start.go:901] validating driver "docker" against &{Name:functional-852853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-852853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 18:14:21.426333   70237 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 18:14:21.444904   70237 out.go:201] 
	W0915 18:14:21.455417   70237 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 18:14:21.457176   70237 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
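
The French stderr above is the point of this test: under a French locale the same RSRC_INSUFFICIENT_REQ_MEMORY failure must be reported as "L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", the localized form of the English message seen in the DryRun test. A sketch of reproducing the localized output, assuming minikube picks its message catalog from the locale environment (LC_ALL/LANG):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "functional-852853", "--dry-run", "--memory", "250MB",
			"--driver=docker")
		// Assumption: minikube derives the message language from the locale
		// environment, so forcing fr_FR should yield the French error text.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR")
		out, _ := cmd.CombinedOutput() // expected to exit non-zero, as above
		fmt.Printf("%s", out)
	}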

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
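
The -f flag of `minikube status` renders a Go template over the status struct, which is why the literal "kublet:" misspelling in the command above is harmless: only the {{.Kubelet}} field reference matters. A small sketch of the three invocations the test makes:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Plain, templated, and JSON status; -f takes a Go text/template and
		// any text outside {{...}} (such as a "kublet:" label) is copied as-is.
		for _, args := range [][]string{
			{"status"},
			{"status", "-f", "host:{{.Host}},kubelet:{{.Kubelet}}"},
			{"status", "-o", "json"},
		} {
			full := append([]string{"-p", "functional-852853"}, args...)
			out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
			fmt.Printf("%v -> err=%v\n%s\n", args, err, out)
		}
	}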

TestFunctional/parallel/ServiceCmdConnect (10.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-852853 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-852853 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5w7pq" [72db5d09-4975-4e49-981b-80beeac16c4f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5w7pq" [72db5d09-4975-4e49-981b-80beeac16c4f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004029746s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31063
functional_test.go:1675: http://192.168.49.2:31063: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5w7pq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31063
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.54s)
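
The flow above is deploy, expose as NodePort, ask minikube for the reachable URL, then fetch it; the echoserver body printed back confirms end-to-end connectivity. A condensed Go sketch of the same flow (it omits the pod-readiness wait the real test performs before calling the service):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func run(name string, args ...string) string {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		run("kubectl", "--context", "functional-852853", "create", "deployment",
			"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
		run("kubectl", "--context", "functional-852853", "expose", "deployment",
			"hello-node-connect", "--type=NodePort", "--port=8080")
		// NOTE: the real test first waits for app=hello-node-connect to be Running.
		url := run("out/minikube-linux-amd64", "-p", "functional-852853",
			"service", "hello-node-connect", "--url") // e.g. http://192.168.49.2:31063
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s", body)
	}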

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (37.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63e1671b-e540-405b-8dd7-ee2cef2d855a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00402462s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-852853 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-852853 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-852853 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-852853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [adb0b2b0-0b8f-4c4a-bf25-9eb497e20693] Pending
helpers_test.go:344: "sp-pod" [adb0b2b0-0b8f-4c4a-bf25-9eb497e20693] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [adb0b2b0-0b8f-4c4a-bf25-9eb497e20693] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.032642318s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-852853 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-852853 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-852853 delete -f testdata/storage-provisioner/pod.yaml: (1.594542158s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-852853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [77fae758-590f-4dd4-8aab-1342f4010cc2] Pending
helpers_test.go:344: "sp-pod" [77fae758-590f-4dd4-8aab-1342f4010cc2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [77fae758-590f-4dd4-8aab-1342f4010cc2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.006174882s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-852853 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.58s)
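
The persistence check works by writing through the PVC mount, deleting the pod, recreating it against the same claim, and verifying the file survived. A condensed sketch using the manifest paths from the log (the wait-for-Ready steps are elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kc(args ...string) string {
		full := append([]string{"--context", "functional-852853"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		return string(out)
	}

	func main() {
		kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write via the PVC mount
		kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kc("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
		// (the real test waits here until the recreated sp-pod is Running)
		fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount")) // "foo" must still exist
	}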

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.96s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh -n functional-852853 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cp functional-852853:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2665838782/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh -n functional-852853 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh -n functional-852853 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)
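
Each `cp` above is verified by reading the file back over SSH. A sketch of one round-trip comparison:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Copy a local file into the node, then read it back over SSH and compare.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp: %v\n%s", err, out))
		}
		remote, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"ssh", "-n", "functional-852853", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		local, _ := os.ReadFile("testdata/cp-test.txt")
		fmt.Println("round-trip intact:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
	}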

TestFunctional/parallel/MySQL (25.89s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-852853 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-mrfpt" [9a5b14e3-f390-42c3-9f6d-24ed18f29b22] Pending
helpers_test.go:344: "mysql-6cdb49bbb-mrfpt" [9a5b14e3-f390-42c3-9f6d-24ed18f29b22] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-mrfpt" [9a5b14e3-f390-42c3-9f6d-24ed18f29b22] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003902807s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;": exit status 1 (214.485143ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;": exit status 1 (159.001175ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;": exit status 1 (113.172184ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-852853 exec mysql-6cdb49bbb-mrfpt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.89s)
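
The three failed attempts above (ERROR 1045, then ERROR 2002) are mysqld's normal warm-up window after the pod first reports Running; the test simply retries the query until it succeeds. A sketch of that retry loop:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// mysqld keeps refusing connections for a short while after the pod is
		// Running, so retry with a growing backoff, mirroring the test above.
		args := []string{"--context", "functional-852853", "exec",
			"mysql-6cdb49bbb-mrfpt", "--", "mysql", "-ppassword", "-e", "show databases;"}
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			time.Sleep(time.Duration(attempt) * time.Second)
		}
	}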

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17950/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /etc/test/nested/copy/17950/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17950.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /etc/ssl/certs/17950.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17950.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /usr/share/ca-certificates/17950.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/179502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /etc/ssl/certs/179502.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/179502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /usr/share/ca-certificates/179502.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)
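
CertSync places the same certificate at a named .pem in two directories plus what looks like an OpenSSL hash-named entry (51391683.0 / 3ec20f2e.0); the checks above just cat each path. A sketch that additionally asserts the copies agree:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// catInVM reads a file inside the minikube node over `minikube ssh`.
	func catInVM(path string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"ssh", "sudo cat "+path).Output()
		if err != nil {
			panic(err)
		}
		return bytes.TrimSpace(out)
	}

	func main() {
		// All three locations should hold byte-identical certificate content.
		a := catInVM("/etc/ssl/certs/17950.pem")
		b := catInVM("/usr/share/ca-certificates/17950.pem")
		c := catInVM("/etc/ssl/certs/51391683.0")
		fmt.Println("in sync:", bytes.Equal(a, b) && bytes.Equal(b, c))
	}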

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-852853 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh "sudo systemctl is-active crio": exit status 1 (259.294ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
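
The "exit status 3" in stderr is systemctl's documented exit code for an inactive unit, so a non-zero exit plus "inactive" on stdout is the passing outcome here. A sketch of interpreting that result:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active` prints "inactive" and exits 3 when the unit is
		// not running; the test treats the resulting non-zero exit as expected.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"ssh", "sudo systemctl is-active crio").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("crio disabled as expected (exit %d): %s", ee.ExitCode(), out)
			return
		}
		fmt.Printf("unexpected: err=%v out=%s", err, out)
	}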

TestFunctional/parallel/License (0.65s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-852853 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-852853 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4fs86" [c3eb9062-bfc5-43eb-8b40-a420a552d8e6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-4fs86" [c3eb9062-bfc5-43eb-8b40-a420a552d8e6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.01514288s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 68474: os: process already finished
helpers_test.go:502: unable to terminate pid 68198: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)
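
The "process already finished" helper messages are benign: the test launches two tunnel daemons against the same profile and the teardown races with processes that have already exited on their own. A sketch of the start/stop choreography, under the same assumption that killing an already-dead process is tolerated:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := func() *exec.Cmd {
			c := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
				"tunnel", "--alsologtostderr")
			if err := c.Start(); err != nil {
				panic(err)
			}
			return c
		}
		a, b := start(), start() // two tunnels for the same profile
		time.Sleep(2 * time.Second)
		for _, c := range []*exec.Cmd{a, b} {
			_ = c.Process.Kill() // "process already finished" is tolerated
			_ = c.Wait()         // reap whatever is left
			fmt.Println("stopped pid", c.Process.Pid)
		}
	}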

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-852853 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5bfc1ad0-a5e4-48b6-b14d-8d61508798f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5bfc1ad0-a5e4-48b6-b14d-8d61508798f0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00433747s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.26s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service list -o json
functional_test.go:1494: Took "524.272464ms" to run "out/minikube-linux-amd64 -p functional-852853 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31377
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31377
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
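
List, JSONOutput, HTTPS, Format, and URL are all views over the same service record; --url returns the plain NodePort endpoint seen above. A minimal sketch of retrieving it:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// --url prints the plain NodePort endpoint (http://192.168.49.2:31377
		// above); --https --url and --format={{.IP}} are views of the same record.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"service", "hello-node", "--url").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("endpoint:", strings.TrimSpace(string(out)))
	}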

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-852853 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
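
With a tunnel running, nginx-svc receives a LoadBalancer ingress IP, which the test reads with a jsonpath expression. Sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// jsonpath pulls the tunnel-assigned LoadBalancer IP straight off the
		// service status; it stays empty until `minikube tunnel` is running.
		out, err := exec.Command("kubectl", "--context", "functional-852853",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("ingress IP: %s\n", out)
	}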

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852853 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-852853
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-852853
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852853 image ls --format short --alsologtostderr:
I0915 18:14:48.334291   76544 out.go:345] Setting OutFile to fd 1 ...
I0915 18:14:48.334430   76544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.334441   76544 out.go:358] Setting ErrFile to fd 2...
I0915 18:14:48.334447   76544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.334702   76544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
I0915 18:14:48.335346   76544 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.335468   76544 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.335970   76544 cli_runner.go:164] Run: docker container inspect functional-852853 --format={{.State.Status}}
I0915 18:14:48.356542   76544 ssh_runner.go:195] Run: systemctl --version
I0915 18:14:48.356600   76544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852853
I0915 18:14:48.373423   76544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/functional-852853/id_rsa Username:docker}
I0915 18:14:48.470881   76544 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
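
`image ls --format short` emits one repo:tag per line, so checking an image's presence is a simple line scan. Sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"image", "ls", "--format", "short").Output()
		if err != nil {
			panic(err)
		}
		// One repo:tag per line, e.g. "registry.k8s.io/pause:3.10".
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if strings.HasPrefix(line, "registry.k8s.io/kube-") {
				fmt.Println("control-plane image:", line)
			}
		}
	}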

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852853 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-852853 | 0114bd10c985e | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-852853 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852853 image ls --format table --alsologtostderr:
I0915 18:14:48.887834   76844 out.go:345] Setting OutFile to fd 1 ...
I0915 18:14:48.887964   76844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.887975   76844 out.go:358] Setting ErrFile to fd 2...
I0915 18:14:48.887982   76844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.888163   76844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
I0915 18:14:48.888737   76844 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.888829   76844 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.889188   76844 cli_runner.go:164] Run: docker container inspect functional-852853 --format={{.State.Status}}
I0915 18:14:48.906245   76844 ssh_runner.go:195] Run: systemctl --version
I0915 18:14:48.906291   76844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852853
I0915 18:14:48.924475   76844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/functional-852853/id_rsa Username:docker}
I0915 18:14:49.119940   76844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls --format json --alsologtostderr
2024/09/15 18:14:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852853 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-852853"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0114bd10c985e35586efc27375b08e0c782023527cc9f8f388df7a5ff42faba8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-852853"],"size":"30"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852853 image ls --format json --alsologtostderr:
I0915 18:14:48.619242   76706 out.go:345] Setting OutFile to fd 1 ...
I0915 18:14:48.619494   76706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.619545   76706 out.go:358] Setting ErrFile to fd 2...
I0915 18:14:48.619557   76706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.619831   76706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
I0915 18:14:48.620661   76706 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.620823   76706 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.621401   76706 cli_runner.go:164] Run: docker container inspect functional-852853 --format={{.State.Status}}
I0915 18:14:48.642720   76706 ssh_runner.go:195] Run: systemctl --version
I0915 18:14:48.642808   76706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852853
I0915 18:14:48.661680   76706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/functional-852853/id_rsa Username:docker}
I0915 18:14:48.820111   76706 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
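
The JSON form decodes into a simple slice of records; the fields visible above are id, repoDigests, repoTags, and size (note size is a string). A sketch of consuming it:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields visible in the JSON output above.
	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-852853",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var imgs []image
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, im := range imgs {
			fmt.Printf("%-60v %s bytes\n", im.RepoTags, im.Size)
		}
	}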

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852853 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0114bd10c985e35586efc27375b08e0c782023527cc9f8f388df7a5ff42faba8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-852853
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-852853
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852853 image ls --format yaml --alsologtostderr:
I0915 18:14:48.395856   76607 out.go:345] Setting OutFile to fd 1 ...
I0915 18:14:48.395953   76607 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.395961   76607 out.go:358] Setting ErrFile to fd 2...
I0915 18:14:48.395966   76607 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.396155   76607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
I0915 18:14:48.396754   76607 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.396845   76607 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.397196   76607 cli_runner.go:164] Run: docker container inspect functional-852853 --format={{.State.Status}}
I0915 18:14:48.414340   76607 ssh_runner.go:195] Run: systemctl --version
I0915 18:14:48.414409   76607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852853
I0915 18:14:48.432254   76607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/functional-852853/id_rsa Username:docker}
I0915 18:14:48.523498   76607 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh pgrep buildkitd: exit status 1 (274.508032ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image build -t localhost/my-image:functional-852853 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-852853 image build -t localhost/my-image:functional-852853 testdata/build --alsologtostderr: (3.390892534s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852853 image build -t localhost/my-image:functional-852853 testdata/build --alsologtostderr:
I0915 18:14:48.812368   76814 out.go:345] Setting OutFile to fd 1 ...
I0915 18:14:48.812643   76814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.812652   76814 out.go:358] Setting ErrFile to fd 2...
I0915 18:14:48.812657   76814 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 18:14:48.812852   76814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
I0915 18:14:48.813449   76814 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.813997   76814 config.go:182] Loaded profile config "functional-852853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 18:14:48.814460   76814 cli_runner.go:164] Run: docker container inspect functional-852853 --format={{.State.Status}}
I0915 18:14:48.837292   76814 ssh_runner.go:195] Run: systemctl --version
I0915 18:14:48.837356   76814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852853
I0915 18:14:48.861054   76814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/functional-852853/id_rsa Username:docker}
I0915 18:14:49.024118   76814 build_images.go:161] Building image from path: /tmp/build.2516670718.tar
I0915 18:14:49.024192   76814 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 18:14:49.033721   76814 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2516670718.tar
I0915 18:14:49.037505   76814 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2516670718.tar: stat -c "%s %y" /var/lib/minikube/build/build.2516670718.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2516670718.tar': No such file or directory
I0915 18:14:49.037532   76814 ssh_runner.go:362] scp /tmp/build.2516670718.tar --> /var/lib/minikube/build/build.2516670718.tar (3072 bytes)
I0915 18:14:49.119483   76814 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2516670718
I0915 18:14:49.130648   76814 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2516670718 -xf /var/lib/minikube/build/build.2516670718.tar
I0915 18:14:49.143577   76814 docker.go:360] Building image: /var/lib/minikube/build/build.2516670718
I0915 18:14:49.143642   76814 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-852853 /var/lib/minikube/build/build.2516670718
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c90adbac73a637b8420eefa141ef1fa73559a38bc9abd3df01fa2fff9000c541 done
#8 naming to localhost/my-image:functional-852853 done
#8 DONE 0.0s
I0915 18:14:52.140297   76814 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-852853 /var/lib/minikube/build/build.2516670718: (2.996633694s)
I0915 18:14:52.140366   76814 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2516670718
I0915 18:14:52.148865   76814 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2516670718.tar
I0915 18:14:52.157111   76814 build_images.go:217] Built localhost/my-image:functional-852853 from /tmp/build.2516670718.tar
I0915 18:14:52.157144   76814 build_images.go:133] succeeded building to: functional-852853
I0915 18:14:52.157151   76814 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)
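
The three BuildKit steps above ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile along these lines. This is a sketch reconstructed from the step log, not the verbatim contents of testdata/build/Dockerfile (the real file is ~97B, per "#1 transferring dockerfile: 97B"):

	# Hypothetical reconstruction of testdata/build/Dockerfile from the step log
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /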

TestFunctional/parallel/ImageCommands/Setup (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.870657179s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-852853
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.107.58 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-852853 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.94s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-852853 docker-env) && out/minikube-linux-amd64 status -p functional-852853"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-852853 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.94s)
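
The test drives the same flow a user would: eval the docker-env output so the host docker CLI talks to the daemon inside the minikube node, then run a docker command against it. A minimal sketch, assuming the functional-852853 profile from this run:

	# Point the host docker client at the daemon inside the node, then verify.
	eval $(out/minikube-linux-amd64 -p functional-852853 docker-env)
	docker images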

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "332.502147ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "64.777164ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/MountCmd/any-port (19.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdany-port2816524674/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726424063821887540" to /tmp/TestFunctionalparallelMountCmdany-port2816524674/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726424063821887540" to /tmp/TestFunctionalparallelMountCmdany-port2816524674/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726424063821887540" to /tmp/TestFunctionalparallelMountCmdany-port2816524674/001/test-1726424063821887540
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.331591ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 18:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 18:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 18:14 test-1726424063821887540
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh cat /mount-9p/test-1726424063821887540
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-852853 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e05aa814-d2d1-445b-b20b-6b25d6bf2a04] Pending
helpers_test.go:344: "busybox-mount" [e05aa814-d2d1-445b-b20b-6b25d6bf2a04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e05aa814-d2d1-445b-b20b-6b25d6bf2a04] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e05aa814-d2d1-445b-b20b-6b25d6bf2a04] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.003956162s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-852853 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdany-port2816524674/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.89s)
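
This is the standard 9p mount round-trip: mount a host directory into the node, confirm it with findmnt, exercise it from a pod, then unmount. The first findmnt above exits 1 because it can race the mount coming up; the test simply retries. A condensed sketch, with /tmp/somedir standing in for the test's temp directory:

	# Mount a host dir into the node at /mount-9p (backgrounded), then verify.
	out/minikube-linux-amd64 mount -p functional-852853 /tmp/somedir:/mount-9p &
	out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-852853 ssh -- ls -la /mount-9p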

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image load --daemon kicbase/echo-server:functional-852853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)
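
Together with the Setup test earlier (docker pull kicbase/echo-server:1.0 plus a retag), this checks moving an image from the host docker daemon into the node's image store. A sketch of the same flow:

	# Tag a host-side image, load it into the minikube node, and confirm it lists.
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-852853
	out/minikube-linux-amd64 -p functional-852853 image load --daemon kicbase/echo-server:functional-852853
	out/minikube-linux-amd64 -p functional-852853 image ls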

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "307.997847ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.055103ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image load --daemon kicbase/echo-server:functional-852853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-852853
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image load --daemon kicbase/echo-server:functional-852853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image save kicbase/echo-server:functional-852853 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image rm kicbase/echo-server:functional-852853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)
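
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise the tarball round-trip: save an image out of the node, delete it there, then restore it from the tar. A sketch, with the save path shortened for readability (the run above uses a Jenkins workspace path):

	# Save, remove, then reload the image from a tarball and confirm it lists.
	out/minikube-linux-amd64 -p functional-852853 image save kicbase/echo-server:functional-852853 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-852853 image rm kicbase/echo-server:functional-852853
	out/minikube-linux-amd64 -p functional-852853 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-852853 image ls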

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-852853
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 image save --daemon kicbase/echo-server:functional-852853 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-852853
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdspecific-port445802292/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (314.071983ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdspecific-port445802292/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh "sudo umount -f /mount-9p": exit status 1 (288.754588ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-852853 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdspecific-port445802292/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)
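
Unlike any-port above, this variant pins the 9p server to a fixed port (46464 in this run). The trailing "umount: /mount-9p: not mounted" failure is tolerated cleanup: by that point the mount process has already been stopped. A sketch of the flag in use:

	# Same mount flow, but force the 9p server onto a specific port.
	out/minikube-linux-amd64 mount -p functional-852853 /tmp/somedir:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T /mount-9p | grep 9p"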

TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T" /mount1: exit status 1 (381.539488ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852853 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-852853 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2489232437/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)
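
VerifyCleanup mounts the same host directory at three targets and then checks that a single mount --kill=true tears all of them down, which is why each stop above finds the parent process already gone. A sketch of the cleanup path:

	# Start several mounts, then kill every mount process for the profile at once.
	out/minikube-linux-amd64 mount -p functional-852853 /tmp/somedir:/mount1 &
	out/minikube-linux-amd64 mount -p functional-852853 /tmp/somedir:/mount2 &
	out/minikube-linux-amd64 mount -p functional-852853 /tmp/somedir:/mount3 &
	out/minikube-linux-amd64 mount -p functional-852853 --kill=true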

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-852853
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-852853
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-852853
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (98.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-231214 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 18:15:38.028316   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.035512   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.046943   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.068407   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.109820   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.191265   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.352792   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:38.674455   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:39.316443   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:40.598253   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:43.159836   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:48.282102   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:15:58.524153   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:16:19.005686   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-231214 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m37.816191643s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (98.49s)
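
The --ha flag provisions a cluster with multiple control-plane nodes (ha-231214 here comes up with three control planes plus capacity for workers). The interleaved E0915 cert_rotation errors appear to come from a background cert watcher still pointed at the since-deleted addons-924081 profile; they do not affect this test, which passes. A sketch of the invocation and follow-up health check:

	# Bring up an HA (multi-control-plane) cluster, then confirm per-node health.
	out/minikube-linux-amd64 start -p ha-231214 --wait=true --memory=2200 --ha --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p ha-231214 status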

TestMultiControlPlane/serial/DeployApp (6.67s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-231214 -- rollout status deployment/busybox: (4.720439427s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-86kmc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-k8kfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-qwww4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-86kmc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-k8kfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-qwww4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-86kmc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-k8kfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-qwww4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.67s)
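
The deploy smoke test applies a busybox Deployment, waits for rollout, then checks in-cluster DNS from every replica. A sketch of the core steps for one pod, using plain kubectl with the ha-231214 context (equivalent to the minikube kubectl wrapper the test drives; the pod name is taken from the run above):

	# Deploy, wait for rollout, then verify cluster DNS from a replica.
	kubectl --context ha-231214 apply -f ./testdata/ha/ha-pod-dns-test.yaml
	kubectl --context ha-231214 rollout status deployment/busybox
	kubectl --context ha-231214 exec busybox-7dff88458-86kmc -- nslookup kubernetes.default.svc.cluster.local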

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-86kmc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-86kmc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-k8kfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-k8kfl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-qwww4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-231214 -- exec busybox-7dff88458-qwww4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
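
Host reachability is verified by resolving host.minikube.internal inside each pod and pinging the resulting address (192.168.49.1, the docker network gateway, in this run). A sketch for a single pod:

	# Resolve the host address as seen from inside the cluster, then ping it once.
	kubectl --context ha-231214 exec busybox-7dff88458-86kmc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-231214 exec busybox-7dff88458-86kmc -- sh -c "ping -c 1 192.168.49.1"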

TestMultiControlPlane/serial/AddWorkerNode (20.45s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-231214 -v=7 --alsologtostderr
E0915 18:16:59.967280   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-231214 -v=7 --alsologtostderr: (19.597106861s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.45s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-231214 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (15.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp testdata/cp-test.txt ha-231214:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4052333957/001/cp-test_ha-231214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214:/home/docker/cp-test.txt ha-231214-m02:/home/docker/cp-test_ha-231214_ha-231214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test_ha-231214_ha-231214-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214:/home/docker/cp-test.txt ha-231214-m03:/home/docker/cp-test_ha-231214_ha-231214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test_ha-231214_ha-231214-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214:/home/docker/cp-test.txt ha-231214-m04:/home/docker/cp-test_ha-231214_ha-231214-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test_ha-231214_ha-231214-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp testdata/cp-test.txt ha-231214-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4052333957/001/cp-test_ha-231214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m02:/home/docker/cp-test.txt ha-231214:/home/docker/cp-test_ha-231214-m02_ha-231214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test_ha-231214-m02_ha-231214.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m02:/home/docker/cp-test.txt ha-231214-m03:/home/docker/cp-test_ha-231214-m02_ha-231214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test_ha-231214-m02_ha-231214-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m02:/home/docker/cp-test.txt ha-231214-m04:/home/docker/cp-test_ha-231214-m02_ha-231214-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test_ha-231214-m02_ha-231214-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp testdata/cp-test.txt ha-231214-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4052333957/001/cp-test_ha-231214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m03:/home/docker/cp-test.txt ha-231214:/home/docker/cp-test_ha-231214-m03_ha-231214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test_ha-231214-m03_ha-231214.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m03:/home/docker/cp-test.txt ha-231214-m02:/home/docker/cp-test_ha-231214-m03_ha-231214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test_ha-231214-m03_ha-231214-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m03:/home/docker/cp-test.txt ha-231214-m04:/home/docker/cp-test_ha-231214-m03_ha-231214-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test_ha-231214-m03_ha-231214-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp testdata/cp-test.txt ha-231214-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4052333957/001/cp-test_ha-231214-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m04:/home/docker/cp-test.txt ha-231214:/home/docker/cp-test_ha-231214-m04_ha-231214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214 "sudo cat /home/docker/cp-test_ha-231214-m04_ha-231214.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m04:/home/docker/cp-test.txt ha-231214-m02:/home/docker/cp-test_ha-231214-m04_ha-231214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test_ha-231214-m04_ha-231214-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 cp ha-231214-m04:/home/docker/cp-test.txt ha-231214-m03:/home/docker/cp-test_ha-231214-m04_ha-231214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m03 "sudo cat /home/docker/cp-test_ha-231214-m04_ha-231214-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.98s)
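
CopyFile fans the same fixture out across every pairing of the four nodes: host-to-node, node-to-host, and node-to-node via minikube cp, each copy verified with a cat over SSH. One representative pairing as a sketch:

	# Copy the fixture into m02, then read it back on that node.
	out/minikube-linux-amd64 -p ha-231214 cp testdata/cp-test.txt ha-231214-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-231214 ssh -n ha-231214-m02 "sudo cat /home/docker/cp-test.txt"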

TestMultiControlPlane/serial/StopSecondaryNode (11.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-231214 node stop m02 -v=7 --alsologtostderr: (10.797097745s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr: exit status 7 (665.828389ms)

-- stdout --
	ha-231214
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-231214-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231214-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-231214-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0915 18:17:29.545781  103990 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:17:29.545882  103990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:17:29.545890  103990 out.go:358] Setting ErrFile to fd 2...
	I0915 18:17:29.545894  103990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:17:29.546062  103990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:17:29.546220  103990 out.go:352] Setting JSON to false
	I0915 18:17:29.546251  103990 mustload.go:65] Loading cluster: ha-231214
	I0915 18:17:29.546362  103990 notify.go:220] Checking for updates...
	I0915 18:17:29.546731  103990 config.go:182] Loaded profile config "ha-231214": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:17:29.546747  103990 status.go:255] checking status of ha-231214 ...
	I0915 18:17:29.547186  103990 cli_runner.go:164] Run: docker container inspect ha-231214 --format={{.State.Status}}
	I0915 18:17:29.565311  103990 status.go:330] ha-231214 host status = "Running" (err=<nil>)
	I0915 18:17:29.565332  103990 host.go:66] Checking if "ha-231214" exists ...
	I0915 18:17:29.565567  103990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231214
	I0915 18:17:29.585203  103990 host.go:66] Checking if "ha-231214" exists ...
	I0915 18:17:29.585605  103990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 18:17:29.585673  103990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231214
	I0915 18:17:29.603952  103990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/ha-231214/id_rsa Username:docker}
	I0915 18:17:29.696147  103990 ssh_runner.go:195] Run: systemctl --version
	I0915 18:17:29.700300  103990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 18:17:29.711903  103990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 18:17:29.762625  103990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-15 18:17:29.75296286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 18:17:29.763300  103990 kubeconfig.go:125] found "ha-231214" server: "https://192.168.49.254:8443"
	I0915 18:17:29.763329  103990 api_server.go:166] Checking apiserver status ...
	I0915 18:17:29.763366  103990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 18:17:29.774961  103990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2358/cgroup
	I0915 18:17:29.784435  103990 api_server.go:182] apiserver freezer: "7:freezer:/docker/eabf3c10851d1c0a1cc2df896255d4bb0b095d83e30a16eb76467c930edd779f/kubepods/burstable/pod947e415054d7661205c0e954a588f671/7ffb7b735f5acc014ee8f0f551d58f560852801007d19489867b3316030f00e7"
	I0915 18:17:29.784517  103990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eabf3c10851d1c0a1cc2df896255d4bb0b095d83e30a16eb76467c930edd779f/kubepods/burstable/pod947e415054d7661205c0e954a588f671/7ffb7b735f5acc014ee8f0f551d58f560852801007d19489867b3316030f00e7/freezer.state
	I0915 18:17:29.792818  103990 api_server.go:204] freezer state: "THAWED"
	I0915 18:17:29.792843  103990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 18:17:29.796620  103990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 18:17:29.796646  103990 status.go:422] ha-231214 apiserver status = Running (err=<nil>)
	I0915 18:17:29.796659  103990 status.go:257] ha-231214 status: &{Name:ha-231214 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:17:29.796680  103990 status.go:255] checking status of ha-231214-m02 ...
	I0915 18:17:29.796933  103990 cli_runner.go:164] Run: docker container inspect ha-231214-m02 --format={{.State.Status}}
	I0915 18:17:29.816495  103990 status.go:330] ha-231214-m02 host status = "Stopped" (err=<nil>)
	I0915 18:17:29.816520  103990 status.go:343] host is not running, skipping remaining checks
	I0915 18:17:29.816528  103990 status.go:257] ha-231214-m02 status: &{Name:ha-231214-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:17:29.816552  103990 status.go:255] checking status of ha-231214-m03 ...
	I0915 18:17:29.816809  103990 cli_runner.go:164] Run: docker container inspect ha-231214-m03 --format={{.State.Status}}
	I0915 18:17:29.834442  103990 status.go:330] ha-231214-m03 host status = "Running" (err=<nil>)
	I0915 18:17:29.834463  103990 host.go:66] Checking if "ha-231214-m03" exists ...
	I0915 18:17:29.834738  103990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231214-m03
	I0915 18:17:29.852485  103990 host.go:66] Checking if "ha-231214-m03" exists ...
	I0915 18:17:29.852783  103990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 18:17:29.852836  103990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231214-m03
	I0915 18:17:29.870924  103990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/ha-231214-m03/id_rsa Username:docker}
	I0915 18:17:29.964159  103990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 18:17:29.975473  103990 kubeconfig.go:125] found "ha-231214" server: "https://192.168.49.254:8443"
	I0915 18:17:29.975499  103990 api_server.go:166] Checking apiserver status ...
	I0915 18:17:29.975530  103990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 18:17:29.986555  103990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2254/cgroup
	I0915 18:17:29.995682  103990 api_server.go:182] apiserver freezer: "7:freezer:/docker/e0da6b2c4d145cf18d30495560b8bdae296d3ce2dda644fc6edb6946a24d63b0/kubepods/burstable/podc300d87aeb06f9d45e7a23f4a6e2219a/52e84b28fb0fc0104055310ef2c0bb1518d106cc50e3fd879550860a1c5295c2"
	I0915 18:17:29.995748  103990 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0da6b2c4d145cf18d30495560b8bdae296d3ce2dda644fc6edb6946a24d63b0/kubepods/burstable/podc300d87aeb06f9d45e7a23f4a6e2219a/52e84b28fb0fc0104055310ef2c0bb1518d106cc50e3fd879550860a1c5295c2/freezer.state
	I0915 18:17:30.004115  103990 api_server.go:204] freezer state: "THAWED"
	I0915 18:17:30.004144  103990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 18:17:30.008033  103990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 18:17:30.008065  103990 status.go:422] ha-231214-m03 apiserver status = Running (err=<nil>)
	I0915 18:17:30.008074  103990 status.go:257] ha-231214-m03 status: &{Name:ha-231214-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:17:30.008090  103990 status.go:255] checking status of ha-231214-m04 ...
	I0915 18:17:30.008364  103990 cli_runner.go:164] Run: docker container inspect ha-231214-m04 --format={{.State.Status}}
	I0915 18:17:30.026735  103990 status.go:330] ha-231214-m04 host status = "Running" (err=<nil>)
	I0915 18:17:30.026805  103990 host.go:66] Checking if "ha-231214-m04" exists ...
	I0915 18:17:30.027116  103990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-231214-m04
	I0915 18:17:30.044652  103990 host.go:66] Checking if "ha-231214-m04" exists ...
	I0915 18:17:30.044969  103990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 18:17:30.045009  103990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-231214-m04
	I0915 18:17:30.064933  103990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/ha-231214-m04/id_rsa Username:docker}
	I0915 18:17:30.155902  103990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 18:17:30.166706  103990 status.go:257] ha-231214-m04 status: &{Name:ha-231214-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.46s)
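The status probe traced in the stderr block above has three steps: pgrep the kube-apiserver process, read its freezer cgroup state (THAWED means not paused), then GET /healthz on the HA load-balancer endpoint. A minimal Go sketch of that last step, assuming the endpoint from this run (https://192.168.49.254:8443) and skipping CA verification only to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The real status check authenticates with the cluster CA; skipping
	// verification here is an assumption made only for self-containment.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", as in the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}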

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-231214 node start m02 -v=7 --alsologtostderr: (18.321286958s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.311564766s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (277.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-231214 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-231214 -v=7 --alsologtostderr
E0915 18:18:21.889278   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-231214 -v=7 --alsologtostderr: (33.814860516s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-231214 --wait=true -v=7 --alsologtostderr
E0915 18:19:09.507794   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.514240   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.525787   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.547194   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.588666   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.670140   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:09.831461   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:10.153195   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:10.795065   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:12.076859   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:14.638887   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:19.760574   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:30.002600   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:19:50.484596   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:20:31.447003   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:20:38.028974   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:21:05.732803   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:21:53.368738   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-231214 --wait=true -v=7 --alsologtostderr: (4m3.33066427s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-231214
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (277.25s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.67s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-231214 node delete m03 -v=7 --alsologtostderr: (8.824751306s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.67s)
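The final step above asserts every remaining node reports Ready via a kubectl go-template. A self-contained sketch of the same template logic against stand-in structs; the kubectl original uses lowercase paths (.items, .status) because it walks raw JSON, while Go structs need exported fields:

package main

import (
	"os"
	"text/template"
)

// Stand-ins for the node list kubectl renders; not the real client-go types.
type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

func main() {
	// Same shape as the go-template in the test step, with exported field
	// names because we feed it Go structs instead of raw JSON.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	var n node
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := struct{ Items []node }{Items: []node{n, n}}
	_ = tmpl.Execute(os.Stdout, list) // prints " True" once per node
}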

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

TestMultiControlPlane/serial/StopCluster (32.76s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-231214 stop -v=7 --alsologtostderr: (32.656895924s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr: exit status 7 (102.098734ms)

-- stdout --
	ha-231214
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231214-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-231214-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 18:23:16.278830  135398 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:23:16.278968  135398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:23:16.278980  135398 out.go:358] Setting ErrFile to fd 2...
	I0915 18:23:16.278987  135398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:23:16.279169  135398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:23:16.279356  135398 out.go:352] Setting JSON to false
	I0915 18:23:16.279390  135398 mustload.go:65] Loading cluster: ha-231214
	I0915 18:23:16.279476  135398 notify.go:220] Checking for updates...
	I0915 18:23:16.280012  135398 config.go:182] Loaded profile config "ha-231214": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:23:16.280034  135398 status.go:255] checking status of ha-231214 ...
	I0915 18:23:16.280531  135398 cli_runner.go:164] Run: docker container inspect ha-231214 --format={{.State.Status}}
	I0915 18:23:16.299681  135398 status.go:330] ha-231214 host status = "Stopped" (err=<nil>)
	I0915 18:23:16.299710  135398 status.go:343] host is not running, skipping remaining checks
	I0915 18:23:16.299718  135398 status.go:257] ha-231214 status: &{Name:ha-231214 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:23:16.299747  135398 status.go:255] checking status of ha-231214-m02 ...
	I0915 18:23:16.299993  135398 cli_runner.go:164] Run: docker container inspect ha-231214-m02 --format={{.State.Status}}
	I0915 18:23:16.318621  135398 status.go:330] ha-231214-m02 host status = "Stopped" (err=<nil>)
	I0915 18:23:16.318643  135398 status.go:343] host is not running, skipping remaining checks
	I0915 18:23:16.318649  135398 status.go:257] ha-231214-m02 status: &{Name:ha-231214-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:23:16.318672  135398 status.go:255] checking status of ha-231214-m04 ...
	I0915 18:23:16.318994  135398 cli_runner.go:164] Run: docker container inspect ha-231214-m04 --format={{.State.Status}}
	I0915 18:23:16.336525  135398 status.go:330] ha-231214-m04 host status = "Stopped" (err=<nil>)
	I0915 18:23:16.336568  135398 status.go:343] host is not running, skipping remaining checks
	I0915 18:23:16.336585  135398 status.go:257] ha-231214-m04 status: &{Name:ha-231214-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.76s)
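With every host stopped, `status` exits 7 rather than 0, which is what the test asserts above. A sketch of reading that exit code from a caller, under the assumption (taken from this run) that 7 signals a stopped host; verify the convention against your minikube version before relying on it:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "ha-231214", "status").Output()
	fmt.Print(string(out)) // the per-node table still arrives on stdout
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 is what this run returned with all hosts stopped.
		fmt.Printf("status exited %d\n", exitErr.ExitCode())
	}
}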

TestMultiControlPlane/serial/RestartCluster (85.52s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-231214 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 18:24:09.507199   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:24:37.210935   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-231214 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.688881729s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.52s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

TestMultiControlPlane/serial/AddSecondaryNode (37.33s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-231214 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-231214 --control-plane -v=7 --alsologtostderr: (36.482299252s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-231214 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.64s)
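The HAppy*/Degraded* checks above all shell out to `minikube profile list --output json`. A hedged sketch of consuming that output; the exact top-level grouping (e.g. "valid"/"invalid" in current minikube) is an assumption here, so the decoder keeps each group as raw JSON:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode defensively: keep each group as raw JSON until you need it.
	var groups map[string]json.RawMessage
	if err := json.Unmarshal(out, &groups); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for name, raw := range groups {
		fmt.Printf("%s: %d bytes\n", name, len(raw))
	}
}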

TestImageBuild/serial/Setup (24.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-708775 --driver=docker  --container-runtime=docker
E0915 18:25:38.028975   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-708775 --driver=docker  --container-runtime=docker: (24.450896171s)
--- PASS: TestImageBuild/serial/Setup (24.45s)

TestImageBuild/serial/NormalBuild (2.51s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-708775
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-708775: (2.505023937s)
--- PASS: TestImageBuild/serial/NormalBuild (2.51s)

TestImageBuild/serial/BuildWithBuildArg (1.06s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-708775
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-708775: (1.058129505s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.06s)
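Here `--build-opt=build-arg=ENV_A=...` forwards a Docker build argument through minikube to the in-cluster builder. A sketch of issuing the same build programmatically, reusing the tag, profile, and testdata path from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied from the invocation logged above.
	cmd := exec.Command("minikube", "-p", "image-708775", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}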

TestImageBuild/serial/BuildWithDockerIgnore (0.84s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-708775
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.84s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-708775
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

TestJSONOutput/start/Command (64.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-583432 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-583432 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m4.149303366s)
--- PASS: TestJSONOutput/start/Command (64.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-583432 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-583432 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-583432 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-583432 --output=json --user=testUser: (5.727061371s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-589788 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-589788 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.474481ms)

-- stdout --
	{"specversion":"1.0","id":"a5dca621-8233-40c9-a531-c87446fd8e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-589788] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"651d7af4-67f6-4262-8a36-ad2899c9b52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"9c69d729-c0fd-4260-98a0-74e2e42613d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30746f94-b007-4f71-a207-105c6684d826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig"}}
	{"specversion":"1.0","id":"70b7dbce-a53b-4581-be62-5737d73a580b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube"}}
	{"specversion":"1.0","id":"d75811eb-ef3b-4a8d-b08a-9b8738a97fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4ec75cb1-1e36-4cf4-81f1-d520c9c630e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eb8e7419-2a5f-4b22-82c6-9d5ba1fcb711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-589788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-589788
--- PASS: TestErrorJSONOutput (0.21s)
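Each stdout line above is a CloudEvents-style envelope; the error event carries the exit code and error name in its data map. A sketch of decoding one such line, modeling only the envelope fields it needs (minikube emits more):

package main

import (
	"encoding/json"
	"fmt"
)

// Just the envelope fields this sketch needs.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Trimmed from the DRV_UNSUPPORTED_OS event logged above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, "->", ev.Data["name"]+":", ev.Data["message"])
}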

TestKicCustomNetwork/create_custom_network (26.94s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-031510 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-031510 --network=: (24.908925286s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-031510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-031510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-031510: (2.013699923s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.94s)

TestKicCustomNetwork/use_default_bridge_network (25.82s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-144682 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-144682 --network=bridge: (23.97098442s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-144682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-144682
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-144682: (1.827675745s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.82s)

TestKicExistingNetwork (25.69s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-250836 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-250836 --network=existing-network: (23.746994666s)
helpers_test.go:175: Cleaning up "existing-network-250836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-250836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-250836: (1.796731018s)
--- PASS: TestKicExistingNetwork (25.69s)

TestKicCustomSubnet (23.54s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-002673 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-002673 --subnet=192.168.60.0/24: (21.486488334s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-002673 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-002673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-002673
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-002673: (2.030955905s)
--- PASS: TestKicCustomSubnet (23.54s)
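The subnet assertion above uses `docker network inspect` with a Go template over the first IPAM config. The same check scripted, reusing the network name and subnet from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template string copied verbatim from the test step above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-002673",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("subnet:", got, "matches request:", got == "192.168.60.0/24")
}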

TestKicStaticIP (26.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-516633 --static-ip=192.168.200.200
E0915 18:29:09.507601   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-516633 --static-ip=192.168.200.200: (24.524981978s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-516633 ip
helpers_test.go:175: Cleaning up "static-ip-516633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-516633
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-516633: (1.904062407s)
--- PASS: TestKicStaticIP (26.55s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (50.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-289784 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-289784 --driver=docker  --container-runtime=docker: (21.145339916s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-299991 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-299991 --driver=docker  --container-runtime=docker: (23.903891407s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-289784
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-299991
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-299991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-299991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-299991: (2.04409299s)
helpers_test.go:175: Cleaning up "first-289784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-289784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-289784: (2.064256151s)
--- PASS: TestMinikubeProfile (50.24s)

TestMountStart/serial/StartWithMountFirst (10.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-237531 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-237531 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.308643416s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.31s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-237531 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
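The mount-start tests boot profiles with a 9p host mount (--mount-port 46464/46465) and verify it simply by listing /minikube-host over ssh. The same verification, scripted; the profile name is the one created above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// An empty or error result means the host mount is not visible in the VM.
	out, err := exec.Command("minikube", "-p", "mount-start-1-237531",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("mount not visible:", err)
	}
}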

TestMountStart/serial/StartWithMountSecond (10.46s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-248968 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-248968 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.46325369s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.46s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-237531 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-237531 --alsologtostderr -v=5: (1.453070846s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-248968
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-248968: (1.172545301s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-248968
E0915 18:30:38.028934   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-248968: (7.357409191s)
--- PASS: TestMountStart/serial/RestartStopped (8.36s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (71.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-416065 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-416065 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m11.071250897s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.58s)

TestMultiNode/serial/DeployApp2Nodes (37.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-416065 -- rollout status deployment/busybox: (3.035925037s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0915 18:32:01.094494   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-9qch8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-nw5wd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-9qch8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-nw5wd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-9qch8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-nw5wd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.07s)
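The repeated jsonpath queries above are a poll: the busybox deployment counts as spread once two pod IPs appear instead of one. The retry loop, sketched with an assumed cadence and limit (the test's own backoff may differ):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("kubectl", "--context", "multinode-416065",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if ips := strings.Fields(string(out)); len(ips) >= 2 {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(2 * time.Second) // "may be temporary", as the log puts it
	}
	fmt.Println("timed out waiting for 2 pod IPs")
}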

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-9qch8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-9qch8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-nw5wd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-416065 -- exec busybox-7dff88458-nw5wd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
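The shell pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) slices the resolved address of host.minikube.internal out of busybox nslookup output, whose fifth line holds the answer. A rough Go equivalent of that text surgery over sample output shaped like busybox's (note that strings.Fields collapses runs of spaces where cut would not):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Sample output; the real test captures this inside the pod.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal`
	lines := strings.Split(sample, "\n")
	fields := strings.Fields(lines[4]) // awk 'NR==5' picks the fifth line
	fmt.Println("host IP:", fields[2]) // cut -d' ' -f3 picks the third field
}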

TestMultiNode/serial/AddNode (14.46s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-416065 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-416065 -v 3 --alsologtostderr: (13.805672101s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.46s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-416065 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (8.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp testdata/cp-test.txt multinode-416065:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2159971594/001/cp-test_multinode-416065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065:/home/docker/cp-test.txt multinode-416065-m02:/home/docker/cp-test_multinode-416065_multinode-416065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test_multinode-416065_multinode-416065-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065:/home/docker/cp-test.txt multinode-416065-m03:/home/docker/cp-test_multinode-416065_multinode-416065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test_multinode-416065_multinode-416065-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp testdata/cp-test.txt multinode-416065-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2159971594/001/cp-test_multinode-416065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m02:/home/docker/cp-test.txt multinode-416065:/home/docker/cp-test_multinode-416065-m02_multinode-416065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test_multinode-416065-m02_multinode-416065.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m02:/home/docker/cp-test.txt multinode-416065-m03:/home/docker/cp-test_multinode-416065-m02_multinode-416065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test_multinode-416065-m02_multinode-416065-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp testdata/cp-test.txt multinode-416065-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2159971594/001/cp-test_multinode-416065-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m03:/home/docker/cp-test.txt multinode-416065:/home/docker/cp-test_multinode-416065-m03_multinode-416065.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065 "sudo cat /home/docker/cp-test_multinode-416065-m03_multinode-416065.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 cp multinode-416065-m03:/home/docker/cp-test.txt multinode-416065-m02:/home/docker/cp-test_multinode-416065-m03_multinode-416065-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 ssh -n multinode-416065-m02 "sudo cat /home/docker/cp-test_multinode-416065-m03_multinode-416065-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.88s)
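
For reference, the pattern exercised above is "minikube cp" in every direction (host to node, node to host, node to node), with each copy verified over "minikube ssh". A minimal sketch of the same commands follows; the profile name "demo" and the file paths are hypothetical, while the flags match the invocations above:

	$ minikube -p demo cp ./cp-test.txt demo-m02:/home/docker/cp-test.txt                       # host -> node
	$ minikube -p demo cp demo-m02:/home/docker/cp-test.txt /tmp/cp-test.txt                    # node -> host
	$ minikube -p demo cp demo-m02:/home/docker/cp-test.txt demo-m03:/home/docker/cp-test.txt   # node -> node
	$ minikube -p demo ssh -n demo-m03 "sudo cat /home/docker/cp-test.txt"                      # verify the copy landed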

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-416065 node stop m03: (1.170148482s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-416065 status: exit status 7 (459.070543ms)

-- stdout --
	multinode-416065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-416065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-416065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr: exit status 7 (454.412445ms)

-- stdout --
	multinode-416065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-416065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-416065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 18:33:00.103876  222010 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:33:00.103985  222010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:33:00.103994  222010 out.go:358] Setting ErrFile to fd 2...
	I0915 18:33:00.103998  222010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:33:00.104198  222010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:33:00.104376  222010 out.go:352] Setting JSON to false
	I0915 18:33:00.104403  222010 mustload.go:65] Loading cluster: multinode-416065
	I0915 18:33:00.104520  222010 notify.go:220] Checking for updates...
	I0915 18:33:00.104888  222010 config.go:182] Loaded profile config "multinode-416065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:33:00.104906  222010 status.go:255] checking status of multinode-416065 ...
	I0915 18:33:00.105397  222010 cli_runner.go:164] Run: docker container inspect multinode-416065 --format={{.State.Status}}
	I0915 18:33:00.123121  222010 status.go:330] multinode-416065 host status = "Running" (err=<nil>)
	I0915 18:33:00.123148  222010 host.go:66] Checking if "multinode-416065" exists ...
	I0915 18:33:00.123424  222010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-416065
	I0915 18:33:00.142065  222010 host.go:66] Checking if "multinode-416065" exists ...
	I0915 18:33:00.142380  222010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 18:33:00.142430  222010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-416065
	I0915 18:33:00.159207  222010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/multinode-416065/id_rsa Username:docker}
	I0915 18:33:00.251735  222010 ssh_runner.go:195] Run: systemctl --version
	I0915 18:33:00.255883  222010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 18:33:00.266834  222010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 18:33:00.311881  222010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-15 18:33:00.302678091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 18:33:00.312488  222010 kubeconfig.go:125] found "multinode-416065" server: "https://192.168.67.2:8443"
	I0915 18:33:00.312514  222010 api_server.go:166] Checking apiserver status ...
	I0915 18:33:00.312552  222010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 18:33:00.323617  222010 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2275/cgroup
	I0915 18:33:00.332473  222010 api_server.go:182] apiserver freezer: "7:freezer:/docker/7f4f12e4528df0fa842057a7d1bb2918f485142c274acca419315899116dcf2b/kubepods/burstable/podb79b43a3768a2e73f0411eaa9a883263/e71abfb929406204179298bbbdc5fa0b4a08478f7052ac77b83fe37051977e01"
	I0915 18:33:00.332549  222010 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f4f12e4528df0fa842057a7d1bb2918f485142c274acca419315899116dcf2b/kubepods/burstable/podb79b43a3768a2e73f0411eaa9a883263/e71abfb929406204179298bbbdc5fa0b4a08478f7052ac77b83fe37051977e01/freezer.state
	I0915 18:33:00.340677  222010 api_server.go:204] freezer state: "THAWED"
	I0915 18:33:00.340708  222010 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0915 18:33:00.344361  222010 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0915 18:33:00.344385  222010 status.go:422] multinode-416065 apiserver status = Running (err=<nil>)
	I0915 18:33:00.344398  222010 status.go:257] multinode-416065 status: &{Name:multinode-416065 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:33:00.344419  222010 status.go:255] checking status of multinode-416065-m02 ...
	I0915 18:33:00.344663  222010 cli_runner.go:164] Run: docker container inspect multinode-416065-m02 --format={{.State.Status}}
	I0915 18:33:00.361489  222010 status.go:330] multinode-416065-m02 host status = "Running" (err=<nil>)
	I0915 18:33:00.361511  222010 host.go:66] Checking if "multinode-416065-m02" exists ...
	I0915 18:33:00.361770  222010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-416065-m02
	I0915 18:33:00.379167  222010 host.go:66] Checking if "multinode-416065-m02" exists ...
	I0915 18:33:00.379407  222010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 18:33:00.379451  222010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-416065-m02
	I0915 18:33:00.395968  222010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/19648-11129/.minikube/machines/multinode-416065-m02/id_rsa Username:docker}
	I0915 18:33:00.487583  222010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 18:33:00.498275  222010 status.go:257] multinode-416065-m02 status: &{Name:multinode-416065-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:33:00.498304  222010 status.go:255] checking status of multinode-416065-m03 ...
	I0915 18:33:00.498607  222010 cli_runner.go:164] Run: docker container inspect multinode-416065-m03 --format={{.State.Status}}
	I0915 18:33:00.515498  222010 status.go:330] multinode-416065-m03 host status = "Stopped" (err=<nil>)
	I0915 18:33:00.515521  222010 status.go:343] host is not running, skipping remaining checks
	I0915 18:33:00.515535  222010 status.go:257] multinode-416065-m03 status: &{Name:multinode-416065-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
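
Worth noting about the exit status above: "minikube node stop" halts a single node while the rest of the cluster keeps running, and "minikube status" intentionally exits with status 7 whenever any host is stopped, so the non-zero exit is the expected outcome here. A minimal sketch, with a hypothetical profile "demo":

	$ minikube -p demo node stop m03   # stop only the m03 worker
	$ minikube -p demo status          # per-node status; exits 7 because one host is Stopped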

TestMultiNode/serial/StartAfterStop (9.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-416065 node start m03 -v=7 --alsologtostderr: (9.258532862s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.92s)
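
The counterpart operation, sketched with a hypothetical profile "demo": "minikube node start" brings a previously stopped node back without touching the others.

	$ minikube -p demo node start m03   # restart just the stopped node
	$ kubectl get nodes                 # the node should rejoin the cluster shortly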

TestMultiNode/serial/RestartKeepsNodes (94.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-416065
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-416065
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-416065: (22.38376465s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-416065 --wait=true -v=8 --alsologtostderr
E0915 18:34:09.507041   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-416065 --wait=true -v=8 --alsologtostderr: (1m12.075442663s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-416065
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.54s)
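
What this verifies: a full "stop" followed by "start --wait=true" recreates every node recorded in the profile, so the node list is unchanged across the restart. A minimal sketch with a hypothetical profile "demo":

	$ minikube node list -p demo           # record the node list
	$ minikube stop -p demo                # stops all nodes in the profile
	$ minikube start -p demo --wait=true   # restart and wait for components to come up healthy
	$ minikube node list -p demo           # should match the list recorded before the stop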

TestMultiNode/serial/DeleteNode (5.17s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-416065 node delete m03: (4.607024694s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)
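
The final kubectl invocation above renders only each node's Ready condition through a go-template. Unwrapped for readability, it is functionally this:

	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

One "True" per remaining node confirms the deleted node is gone and the survivors are Ready.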

TestMultiNode/serial/StopMultiNode (21.43s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-416065 stop: (21.26470275s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-416065 status: exit status 7 (84.366725ms)

-- stdout --
	multinode-416065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-416065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr: exit status 7 (80.018906ms)

-- stdout --
	multinode-416065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-416065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 18:35:11.549963  237345 out.go:345] Setting OutFile to fd 1 ...
	I0915 18:35:11.550211  237345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:35:11.550218  237345 out.go:358] Setting ErrFile to fd 2...
	I0915 18:35:11.550223  237345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 18:35:11.550404  237345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-11129/.minikube/bin
	I0915 18:35:11.550569  237345 out.go:352] Setting JSON to false
	I0915 18:35:11.550597  237345 mustload.go:65] Loading cluster: multinode-416065
	I0915 18:35:11.550743  237345 notify.go:220] Checking for updates...
	I0915 18:35:11.551069  237345 config.go:182] Loaded profile config "multinode-416065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 18:35:11.551085  237345 status.go:255] checking status of multinode-416065 ...
	I0915 18:35:11.551509  237345 cli_runner.go:164] Run: docker container inspect multinode-416065 --format={{.State.Status}}
	I0915 18:35:11.568511  237345 status.go:330] multinode-416065 host status = "Stopped" (err=<nil>)
	I0915 18:35:11.568532  237345 status.go:343] host is not running, skipping remaining checks
	I0915 18:35:11.568540  237345 status.go:257] multinode-416065 status: &{Name:multinode-416065 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 18:35:11.568568  237345 status.go:255] checking status of multinode-416065-m02 ...
	I0915 18:35:11.568838  237345 cli_runner.go:164] Run: docker container inspect multinode-416065-m02 --format={{.State.Status}}
	I0915 18:35:11.586437  237345 status.go:330] multinode-416065-m02 host status = "Stopped" (err=<nil>)
	I0915 18:35:11.586461  237345 status.go:343] host is not running, skipping remaining checks
	I0915 18:35:11.586466  237345 status.go:257] multinode-416065-m02 status: &{Name:multinode-416065-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.43s)

TestMultiNode/serial/RestartMultiNode (52.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-416065 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 18:35:32.572371   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:35:38.028552   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-416065 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (51.585308951s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-416065 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.13s)

TestMultiNode/serial/ValidateNameConflict (27.57s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-416065
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-416065-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-416065-m02 --driver=docker  --container-runtime=docker: exit status 14 (59.465194ms)

-- stdout --
	* [multinode-416065-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-416065-m02' is duplicated with machine name 'multinode-416065-m02' in profile 'multinode-416065'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-416065-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-416065-m03 --driver=docker  --container-runtime=docker: (25.268589871s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-416065
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-416065: exit status 80 (263.928047ms)

-- stdout --
	* Adding node m03 to cluster multinode-416065 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-416065-m03 already exists in multinode-416065-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-416065-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-416065-m03: (1.931613431s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.57s)
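
The constraints validated above, sketched with a hypothetical profile "demo": a new profile name may not collide with a machine name inside an existing multi-node profile (exit 14, MK_USAGE), and "node add" refuses a node name already claimed by a standalone profile (exit 80, GUEST_NODE_ADD).

	$ minikube start -p demo-m02 --driver=docker   # fails: demo-m02 is already a machine in profile demo
	$ minikube start -p demo-m03 --driver=docker   # succeeds as an unrelated standalone profile
	$ minikube node add -p demo                    # fails: the next node name, demo-m03, is taken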

TestPreload (106.24s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-012522 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-012522 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (56.580026938s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-012522 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-012522 image pull gcr.io/k8s-minikube/busybox: (1.897109275s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-012522
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-012522: (10.721211408s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-012522 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-012522 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (34.626780865s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-012522 image list
helpers_test.go:175: Cleaning up "test-preload-012522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-012522
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-012522: (2.145961657s)
--- PASS: TestPreload (106.24s)
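
The flow exercised here: start with --preload=false so images are pulled rather than restored from a preload tarball, cache one extra image, then stop and restart; "image list" at the end confirms the cached image survived. A sketch with a hypothetical profile "demo":

	$ minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=docker
	$ minikube -p demo image pull gcr.io/k8s-minikube/busybox   # add an image to the node's cache
	$ minikube stop -p demo
	$ minikube start -p demo                                    # restart on the current default version
	$ minikube -p demo image list                               # busybox should still be listed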

TestScheduledStopUnix (95.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-158379 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-158379 --memory=2048 --driver=docker  --container-runtime=docker: (22.935671956s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-158379 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-158379 -n scheduled-stop-158379
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-158379 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-158379 --cancel-scheduled
E0915 18:39:09.506965   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-158379 -n scheduled-stop-158379
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-158379
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-158379 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-158379
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-158379: exit status 7 (59.772071ms)

-- stdout --
	scheduled-stop-158379
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-158379 -n scheduled-stop-158379
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-158379 -n scheduled-stop-158379: exit status 7 (59.029963ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-158379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-158379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-158379: (1.609508819s)
--- PASS: TestScheduledStopUnix (95.80s)
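
The scheduled-stop commands this test cycles through, sketched with a hypothetical profile "demo":

	$ minikube stop -p demo --schedule 5m                  # arm a stop five minutes out
	$ minikube status -p demo --format={{.TimeToStop}}     # inspect the pending schedule
	$ minikube stop -p demo --cancel-scheduled             # disarm it
	$ minikube stop -p demo --schedule 15s                 # re-arm with a short fuse and let it fire

Once the timer fires, "minikube status" reports Stopped and exits 7, which the test records as "may be ok".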

TestSkaffold (102.74s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3887219749 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-792816 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-792816 --memory=2600 --driver=docker  --container-runtime=docker: (21.088656173s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3887219749 run --minikube-profile skaffold-792816 --kube-context skaffold-792816 --status-check=true --port-forward=false --interactive=false
E0915 18:40:38.028592   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3887219749 run --minikube-profile skaffold-792816 --kube-context skaffold-792816 --status-check=true --port-forward=false --interactive=false: (1m5.022998017s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5d96c965cf-zq7g8" [39c1cac9-5287-4609-a493-72c3973ec923] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003387448s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58d65bf94-q8ms9" [2b672ced-bcab-4e92-8b09-1e723769e123] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003828713s
helpers_test.go:175: Cleaning up "skaffold-792816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-792816
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-792816: (2.688773086s)
--- PASS: TestSkaffold (102.74s)
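
The integration pattern here: skaffold builds and deploys straight into a minikube profile by targeting that profile's Docker daemon and kube-context. A minimal sketch, assuming a project containing a skaffold.yaml and a hypothetical profile "demo":

	$ skaffold run --minikube-profile demo --kube-context demo \
	    --status-check=true --port-forward=false --interactive=false
	$ kubectl get pods   # the deployed workloads should reach Running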

TestInsufficientStorage (9.68s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-022582 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-022582 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.556386989s)

-- stdout --
	{"specversion":"1.0","id":"fba87462-b664-4a05-b95f-487dabfe21a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-022582] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9e17130-959d-4568-b496-0a29ced10a90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"2b2afe78-e014-409c-87fb-3950c405aa1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c454b97f-4c1f-47db-9c09-9caae5dbd575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig"}}
	{"specversion":"1.0","id":"36c82c17-4a6b-4bd1-816b-b861c5e4dafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube"}}
	{"specversion":"1.0","id":"53e2b1e6-38f3-41a9-b3d0-d4a356a7d87d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fab99e1c-626d-42d8-82a5-2e01c89f70f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37d990b8-3a8b-43f9-a922-254238aafcc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a6a1f313-ea06-4ff1-a1ad-6aeb427a4791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"db9ecb2e-3480-4732-bd18-3279f7d5aeed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dc5bf28-aae4-4932-8249-4a4d914ae4e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"29b4d449-07c7-4bd4-a5fe-d12050217132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-022582\" primary control-plane node in \"insufficient-storage-022582\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b4024bc-faf4-44c3-b6f9-07d9efb7d4d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a85bef7-cfd6-4286-93c1-ef3c6adf2cdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a89027d3-63f9-4d28-8667-663e6f5e716e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-022582 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-022582 --output=json --layout=cluster: exit status 7 (254.777806ms)

-- stdout --
	{"Name":"insufficient-storage-022582","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-022582","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 18:41:47.712067  277462 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-022582" does not appear in /home/jenkins/minikube-integration/19648-11129/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-022582 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-022582 --output=json --layout=cluster: exit status 7 (255.00898ms)

-- stdout --
	{"Name":"insufficient-storage-022582","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-022582","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 18:41:47.967877  277561 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-022582" does not appear in /home/jenkins/minikube-integration/19648-11129/kubeconfig
	E0915 18:41:47.977265  277561 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/insufficient-storage-022582/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-022582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-022582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-022582: (1.613576037s)
--- PASS: TestInsufficientStorage (9.68s)
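
The structured error above (exit code 26, RSRC_DOCKER_STORAGE) embeds its own remediation advice; roughly, the suggested commands are:

	$ docker system prune                   # reclaim unused Docker data on the host (add -a to be aggressive)
	$ minikube ssh -- docker system prune   # prune inside the node when using the Docker container runtime

As the message notes, passing --force to "minikube start" skips the free-space check entirely.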

TestRunningBinaryUpgrade (62.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.276267181 start -p running-upgrade-872530 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.276267181 start -p running-upgrade-872530 --memory=2200 --vm-driver=docker  --container-runtime=docker: (36.425215565s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-872530 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 18:45:38.029098   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-872530 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.51742447s)
helpers_test.go:175: Cleaning up "running-upgrade-872530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-872530
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-872530: (2.148422502s)
--- PASS: TestRunningBinaryUpgrade (62.64s)
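
The upgrade path tested here: a cluster created by an older release binary is taken over in place by the current binary, with no stop in between. A sketch; the old-binary path and profile name are hypothetical:

	$ /tmp/minikube-v1.26.0 start -p demo --memory=2200 --vm-driver=docker   # old binary creates the cluster
	$ minikube start -p demo --alsologtostderr -v=1                          # current binary restarts and upgrades it in place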

TestKubernetesUpgrade (337.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.406960342s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-010221
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-010221: (1.263278614s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-010221 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-010221 status --format={{.Host}}: exit status 7 (86.374507ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.434429067s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-010221 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (94.482755ms)

-- stdout --
	* [kubernetes-upgrade-010221] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-010221
	    minikube start -p kubernetes-upgrade-010221 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0102212 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-010221 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-010221 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.948260757s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-010221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-010221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-010221: (2.51833083s)
--- PASS: TestKubernetesUpgrade (337.84s)
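
The version rules this test pins down, sketched with a hypothetical profile "demo": upgrading a stopped cluster in place is allowed, while downgrading is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the recovery suggestions printed above.

	$ minikube start -p demo --kubernetes-version=v1.20.0 --driver=docker   # create at the old version
	$ minikube stop -p demo
	$ minikube start -p demo --kubernetes-version=v1.31.1                   # in-place upgrade: allowed
	$ minikube start -p demo --kubernetes-version=v1.20.0                   # downgrade: refused, exit 106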

TestMissingContainerUpgrade (154.3s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.846735925 start -p missing-upgrade-647671 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.846735925 start -p missing-upgrade-647671 --memory=2200 --driver=docker  --container-runtime=docker: (1m36.310892343s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-647671
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-647671: (1.667294478s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-647671
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-647671 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 18:44:09.507800   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-647671 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.68727497s)
helpers_test.go:175: Cleaning up "missing-upgrade-647671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-647671
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-647671: (2.229912376s)
--- PASS: TestMissingContainerUpgrade (154.30s)
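
The scenario simulated above: the node's Docker container is stopped and removed behind minikube's back, and the next "start" must recreate it from the surviving profile. A sketch; the old-binary path and profile name are hypothetical:

	$ /tmp/minikube-v1.26.0 start -p demo --memory=2200 --driver=docker
	$ docker stop demo && docker rm demo   # delete the node container out from under minikube
	$ minikube start -p demo               # the current binary recreates the missing container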

TestStoppedBinaryUpgrade/Setup (2.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (72.7001ms)

-- stdout --
	* [NoKubernetes-836780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-11129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-11129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
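
The flag conflict checked above: --no-kubernetes cannot be combined with --kubernetes-version (exit 14, MK_USAGE). As the error text suggests, a globally configured version has to be unset first; the profile name "demo" below is hypothetical:

	$ minikube config unset kubernetes-version                 # clear any global kubernetes-version setting
	$ minikube start -p demo --no-kubernetes --driver=docker   # container only, no kubelet or apiserver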

TestNoKubernetes/serial/StartWithK8s (29.53s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836780 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836780 --driver=docker  --container-runtime=docker: (29.235176049s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836780 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.53s)

TestStoppedBinaryUpgrade/Upgrade (148.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3964558535 start -p stopped-upgrade-856757 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3964558535 start -p stopped-upgrade-856757 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m52.601689661s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3964558535 -p stopped-upgrade-856757 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3964558535 -p stopped-upgrade-856757 stop: (11.132751188s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-856757 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-856757 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.596607449s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.33s)
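
The stopped-binary variant of the upgrade path: the old release creates the cluster, the old release stops it, and the current binary starts (and thereby upgrades) it. A sketch; the old-binary path and profile name are hypothetical:

	$ /tmp/minikube-v1.26.0 start -p demo --memory=2200 --vm-driver=docker
	$ /tmp/minikube-v1.26.0 -p demo stop
	$ minikube start -p demo   # the current binary adopts and upgrades the stopped cluster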

TestNoKubernetes/serial/StartWithStopK8s (16.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --driver=docker  --container-runtime=docker: (15.075218333s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836780 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-836780 status -o json: exit status 2 (263.176124ms)

-- stdout --
	{"Name":"NoKubernetes-836780","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-836780
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-836780: (1.642049594s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.98s)

TestNoKubernetes/serial/Start (11.6s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836780 --no-kubernetes --driver=docker  --container-runtime=docker: (11.59880475s)
--- PASS: TestNoKubernetes/serial/Start (11.60s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836780 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836780 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.064228ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
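
How the assertion works: systemd's is-active returns status 3 for an inactive unit, and that status propagates through "minikube ssh" as a non-zero exit, which is exactly what the test expects. A sketch with a hypothetical profile "demo":

	$ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero ("Process exited with status 3") means no kubelet is running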

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-836780
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-836780: (1.462173973s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.36s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836780 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836780 --driver=docker  --container-runtime=docker: (8.3600251s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836780 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836780 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.067442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-856757
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-856757: (1.124556342s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestPause/serial/Start (38.81s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-300607 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-300607 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (38.81326997s)
--- PASS: TestPause/serial/Start (38.81s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (32.88s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (32.87903066s)
--- PASS: TestNetworkPlugins/group/auto/Start (32.88s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.95s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-300607 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 18:46:26.205767   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.212153   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.223575   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.244981   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.286364   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.367840   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.529356   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:26.851399   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-300607 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.937558987s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.95s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-887220 "pgrep -a kubelet"
E0915 18:46:27.493404   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
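pgrep -a prints each matching process together with its full command line, which is what lets the harness assert on kubelet flags over ssh. A hand-run equivalent that lists the flags one per line (illustrative only, not the harness's own code):

	out/minikube-linux-amd64 ssh -p auto-887220 "pgrep -a kubelet" \
	  | tr ' ' '\n' | grep -- '^--'    # one kubelet flag per line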

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-brp96" [9ed8fc99-1c5b-48f6-9905-de06d1f97cf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 18:46:28.775105   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-brp96" [9ed8fc99-1c5b-48f6-9905-de06d1f97cf7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004174146s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)
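The harness polls for pods matching the app=netcat label until they report healthy, as the Pending/Running transitions above show. Outside the harness, kubectl can express the same wait directly; a rough equivalent (not the test's own code):

	kubectl --context auto-887220 wait pod -l app=netcat \
	  --for=condition=ready --namespace default --timeout=15m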

                                                
                                    
TestPause/serial/Pause (0.55s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-300607 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-300607 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-300607 --output=json --layout=cluster: exit status 2 (323.059014ms)

                                                
                                                
-- stdout --
	{"Name":"pause-300607","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-300607","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
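With --output=json --layout=cluster, minikube reports component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused, as in the JSON above) and exits 2 for a paused cluster, so scripted callers have to tolerate the non-zero exit before parsing. A minimal sketch (jq assumed, not part of the test):

	# '|| true' keeps a 'set -e' script alive past the expected exit code 2.
	out=$(out/minikube-linux-amd64 status -p pause-300607 --output=json --layout=cluster) || true
	echo "$out" | jq -r '.StatusName'    # "Paused"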

                                                
                                    
TestPause/serial/Unpause (0.45s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-300607 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.45s)

                                                
                                    
TestPause/serial/PauseAgain (0.62s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-300607 --alsologtostderr -v=5
E0915 18:46:31.336786   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestPause/serial/PauseAgain (0.62s)

                                                
                                    
TestPause/serial/DeletePaused (2.04s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-300607 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-300607 --alsologtostderr -v=5: (2.04366053s)
--- PASS: TestPause/serial/DeletePaused (2.04s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.69s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-300607
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-300607: exit status 1 (15.884791ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-300607: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.69s)
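Note the inverted assertion here: cleanup is proven by docker volume inspect failing once the profile is deleted. The same idea as a standalone check (illustrative):

	# Deletion is proven by absence: inspect must fail once the profile is gone.
	if docker volume inspect pause-300607 >/dev/null 2>&1; then
	  echo "volume still present"; exit 1
	fi
	echo "volume removed as expected"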

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (58.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.114960187s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.12s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (20.87s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-887220 exec deployment/netcat -- nslookup kubernetes.default
E0915 18:46:36.458519   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:46:46.700271   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-887220 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16141833s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-887220 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-887220 exec deployment/netcat -- nslookup kubernetes.default: (5.13025936s)
--- PASS: TestNetworkPlugins/group/auto/DNS (20.87s)
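The first nslookup timed out and the test still passed because net_test.go:175 reruns the lookup, as the second Run/Done pair above shows. A hand-rolled version of that retry (illustrative only, not the harness's code):

	for i in 1 2 3; do
	  kubectl --context auto-887220 exec deployment/netcat -- \
	    nslookup kubernetes.default && break
	  sleep 5    # give CoreDNS a moment before retrying
	done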

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
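Localhost and HairPin reuse the same busybox nc probe: -z only checks that something is listening (no payload is sent) and -w 5 bounds the wait, so a hung service fails fast rather than blocking the run. The hairpin variant by hand, minus the -i send interval (illustrative):

	kubectl --context auto-887220 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"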

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.91s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m17.908636568s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.91s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-t8twg" [9ab6b8ec-9402-40f1-be35-53278894a67b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004304909s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4ctqf" [3defedb2-365a-4b32-9b1a-8488edf1cd3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4ctqf" [3defedb2-365a-4b32-9b1a-8488edf1cd3e] Running
E0915 18:47:48.143583   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003791075s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.440336295s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.44s)

                                                
                                    
TestNetworkPlugins/group/false/Start (67.41s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m7.404929079s)
--- PASS: TestNetworkPlugins/group/false/Start (67.41s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-79xhb" [a09a36ad-f918-4988-875d-dc94f79a81f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00581415s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (67.08s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m7.080188196s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.08s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2txxf" [f8723ca3-2e21-4f54-a635-1ba09c15fb5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 18:48:41.096333   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2txxf" [f8723ca3-2e21-4f54-a635-1ba09c15fb5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004334717s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nmjbh" [c68e326c-da75-4668-971b-9c47bf74ac68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nmjbh" [c68e326c-da75-4668-971b-9c47bf74ac68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003365332s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (47.68s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.68339668s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.68s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kpfkw" [df3d95fc-1176-4393-9b71-43d080bd6d21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kpfkw" [df3d95fc-1176-4393-9b71-43d080bd6d21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004417939s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (64.37s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m4.369117803s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.37s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xpprk" [7c542fff-b87b-4f04-ac4a-20a3b4cd434d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xpprk" [7c542fff-b87b-4f04-ac4a-20a3b4cd434d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004351983s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (70.84s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-887220 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m10.839911167s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (70.84s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5pb52" [b3928747-1f53-406b-838a-6b3a936623ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00455795s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7slzp" [a791b3c6-179c-4ed2-8a80-d98ac171502a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7slzp" [a791b3c6-179c-4ed2-8a80-d98ac171502a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004346737s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (101.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-910865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-910865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m41.172271407s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (101.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-whvns" [97615502-7d27-4684-a8e2-705a8573270a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-whvns" [97615502-7d27-4684-a8e2-705a8573270a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003794517s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (67.48s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-691797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 18:50:38.028349   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-691797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m7.482631662s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.02s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-870687 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-870687 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m7.022733344s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.02s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-887220 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-887220 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6dvxc" [a6f758fe-25d6-4711-921f-efeb68f16157] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6dvxc" [a6f758fe-25d6-4711-921f-efeb68f16157] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003942718s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-887220 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-887220 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0915 18:56:01.649133   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.655496   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.666905   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.688297   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.729680   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.811119   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:01.972650   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:02.294470   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:02.936632   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:04.217996   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:05.570933   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:06.779947   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:08.351979   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:11.901963   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:17.923557   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:22.144016   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:22.868613   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:26.205607   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:27.726949   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-335928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 18:51:30.295625   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:51:32.857584   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:51:37.979381   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-335928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.11211092s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.11s)
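
The only non-default flag of interest in this start is --apiserver-port=8444. A minimal sketch of the same setup outside the test harness (the profile name "demo" is hypothetical), verifying that the kubeconfig entry really points at the custom port:

	#!/usr/bin/env bash
	set -euo pipefail
	# Start a cluster whose API server listens on 8444 instead of the default 8443.
	minikube start -p demo --memory=2200 --apiserver-port=8444 --driver=docker
	# The context's server URL should end in :8444.
	kubectl config view -o jsonpath='{.clusters[?(@.name=="demo")].cluster.server}'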

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-691797 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8843497c-988d-4169-a7b3-4c2445ae2e5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0915 18:51:48.220679   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8843497c-988d-4169-a7b3-4c2445ae2e5f] Running
E0915 18:51:53.911164   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/skaffold-792816/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00340004s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-691797 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)
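
A sketch of what DeployApp automates, assuming a manifest like the repo's testdata/busybox.yaml with the integration-test=busybox label; plain kubectl wait stands in for the test's polling helper:

	#!/usr/bin/env bash
	set -euo pipefail
	kubectl create -f testdata/busybox.yaml
	# Same readiness gate the test applies, with its 8m budget.
	kubectl wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	# Final assertion: the container's open-file limit is readable in-pod.
	kubectl exec busybox -- /bin/sh -c "ulimit -n"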

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-691797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-691797 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)
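
The step above enables an addon on a live cluster while rewriting its image and registry. A minimal sketch (profile "demo" hypothetical); the jsonpath check is an assumption about where the override should surface:

	#!/usr/bin/env bash
	set -euo pipefail
	minikube addons enable metrics-server -p demo \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# The Deployment's container image should now reference fake.domain.
	kubectl get deploy metrics-server -n kube-system \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'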

TestStartStop/group/no-preload/serial/Stop (10.65s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-691797 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-691797 --alsologtostderr -v=3: (10.648693129s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.65s)
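
Stop is a plain graceful shutdown; the 10-13s durations across this group are the node container winding down. Equivalent invocation, with a hypothetical profile name:

	minikube stop -p demo --alsologtostderr -v=3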

TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-910865 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8941d80f-faf7-4a11-ab66-a0a0eca6911c] Pending
helpers_test.go:344: "busybox" [8941d80f-faf7-4a11-ab66-a0a0eca6911c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8941d80f-faf7-4a11-ab66-a0a0eca6911c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003678476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-910865 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-870687 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0db36b0e-5acf-48b1-8cdc-78e8ac23bbed] Pending
helpers_test.go:344: "busybox" [0db36b0e-5acf-48b1-8cdc-78e8ac23bbed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0db36b0e-5acf-48b1-8cdc-78e8ac23bbed] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003489647s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-870687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-910865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-910865 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-691797 -n no-preload-691797
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-691797 -n no-preload-691797: exit status 7 (128.397936ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-691797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
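
Note the "(may be ok)" marker: minikube status exits 7 when the host is stopped, so callers that only want the state string must tolerate a non-zero exit. A sketch of that pattern (profile name hypothetical):

	#!/usr/bin/env bash
	set -euo pipefail
	# status exits 7 for a stopped host but still prints the state on stdout.
	if state=$(minikube status --format='{{.Host}}' -p demo); then
	  echo "host is ${state}"
	else
	  echo "host is ${state} (exit $?, expected while Stopped)"
	fi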

TestStartStop/group/no-preload/serial/SecondStart (263.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-691797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-691797 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.071242969s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-691797 -n no-preload-691797
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.37s)
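
The roughly 4m23s here, versus the 39s preloaded FirstStart earlier in this report, is largely --preload=false: without the preloaded image tarball, every control-plane image is pulled individually on restart. Sketch (profile hypothetical):

	minikube start -p demo --memory=2200 --preload=false \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.31.1
	# List what ended up in the container runtime after the pull-everything start.
	minikube -p demo image list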

TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-910865 --alsologtostderr -v=3
E0915 18:52:08.702961   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-910865 --alsologtostderr -v=3: (10.861762005s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-335928 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [455d12b3-4557-4441-ab9e-a43bcde4114e] Pending
helpers_test.go:344: "busybox" [455d12b3-4557-4441-ab9e-a43bcde4114e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [455d12b3-4557-4441-ab9e-a43bcde4114e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004502581s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-335928 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-870687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-870687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-870687 --alsologtostderr -v=3
E0915 18:52:12.574315   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-870687 --alsologtostderr -v=3: (10.899567634s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910865 -n old-k8s-version-910865
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910865 -n old-k8s-version-910865: exit status 7 (93.523541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-910865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (141.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-910865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-910865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m21.11241748s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910865 -n old-k8s-version-910865
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (141.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-335928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-335928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001946497s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-335928 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-335928 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-335928 --alsologtostderr -v=3: (13.180832895s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-870687 -n embed-certs-870687
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-870687 -n embed-certs-870687: exit status 7 (72.853904ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-870687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (267.87s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-870687 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-870687 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.552984056s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-870687 -n embed-certs-870687
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.87s)
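
What --embed-certs exercises: the client credentials are inlined into kubeconfig as base64 *-data fields instead of file paths under .minikube/profiles. A sketch to confirm that (profile name hypothetical):

	minikube start -p demo --embed-certs --driver=docker
	# Non-empty output means the cert is embedded, not referenced by path.
	kubectl config view --raw \
	  -o jsonpath='{.users[?(@.name=="demo")].user.client-certificate-data}' | head -c 40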

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928: exit status 7 (137.102213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-335928 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-335928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 18:52:32.945002   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:32.951407   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:32.962854   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:32.987887   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:33.029180   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:33.114880   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:33.276205   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:33.597990   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:34.239792   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:35.521562   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:38.083146   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:43.204857   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:49.664490   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:52:53.447167   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:13.929021   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.062926   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.069330   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.080834   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.102247   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.143693   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.225134   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.386665   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:34.708781   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:35.351101   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:36.633405   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:39.194741   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:44.316436   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.769911   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.776303   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.787662   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.809024   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.850784   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:48.932402   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:49.093926   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:49.415428   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:50.057366   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:51.339165   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:53.900569   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:54.558499   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:54.891357   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:53:59.021934   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:09.263182   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:09.506864   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/functional-852853/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:11.586553   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:15.040315   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:18.855933   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:18.862328   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:18.873721   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:18.895096   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:18.936477   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:19.018077   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:19.180330   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:19.502034   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:20.144015   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:21.425750   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:23.987984   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:29.110079   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:29.745161   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-335928 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.362090445s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.66s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s8ptr" [08bfef4b-ce0d-427c-a18c-18d9d1b3c4e3] Running
E0915 18:54:39.352300   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003707169s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
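
UserAppExistsAfterStop asserts that workloads deployed before the stop come back Ready after the restart, with no reinstallation. The manual equivalent of the check above:

	kubectl wait --for=condition=Ready pod \
	  -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m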

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s8ptr" [08bfef4b-ce0d-427c-a18c-18d9d1b3c4e3] Running
E0915 18:54:46.414641   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.421005   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.432369   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.453788   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.495171   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.576993   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:46.738654   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:47.060199   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:47.702335   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:48.983970   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004366533s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-910865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-910865 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
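
The image check lists images as JSON and flags anything outside the expected Kubernetes set; here it found the busybox left over from DeployApp, which is harmless. The Go test parses the JSON itself; jq and the repoTags field are assumptions for a manual equivalent:

	minikube -p demo image list --format=json | jq -r '.[].repoTags[]'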

TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-910865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910865 -n old-k8s-version-910865
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910865 -n old-k8s-version-910865: exit status 2 (291.46032ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910865 -n old-k8s-version-910865
E0915 18:54:51.545483   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910865 -n old-k8s-version-910865: exit status 2 (299.682361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-910865 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910865 -n old-k8s-version-910865
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910865 -n old-k8s-version-910865
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.48s)
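
The Pause sequence above in script form: after pause, status reports APIServer=Paused and Kubelet=Stopped and exits 2 (which the test treats as ok); after unpause both return to Running. Profile name hypothetical:

	#!/usr/bin/env bash
	set -euo pipefail
	minikube pause -p demo --alsologtostderr -v=1
	minikube status -p demo --format='{{.APIServer}}' || true   # Paused, exit 2
	minikube status -p demo --format='{{.Kubelet}}'   || true   # Stopped, exit 2
	minikube unpause -p demo --alsologtostderr -v=1
	minikube status -p demo --format='{{.APIServer}}'           # Running again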

TestStartStop/group/newest-cni/serial/FirstStart (30.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-762574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 18:54:56.002278   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/calico-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:56.667115   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:54:59.834024   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:00.929652   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:00.936050   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:00.947460   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:00.968864   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:01.010337   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:01.091871   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:01.253394   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:01.575032   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:02.216987   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:03.499105   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:06.060873   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:06.908620   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:10.707380   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:11.182333   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:16.812671   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kindnet-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:21.424359   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.593703   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.600100   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.611525   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.632960   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.674812   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.756222   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:24.917731   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:25.239714   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-762574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (30.085507702s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.09s)
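
This start narrows --wait to apiserver,system_pods,default_sa because with a bare CNI setup (no network plugin installed) most pods cannot schedule yet, which is also why later steps in this group print the "cni mode requires additional setup" warning. Sketch with the same flags (profile hypothetical):

	minikube start -p demo --memory=2200 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --kubernetes-version=v1.31.1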

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-762574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0915 18:55:25.881578   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (10.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-762574 --alsologtostderr -v=3
E0915 18:55:27.162841   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:27.390471   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/enable-default-cni-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:29.725147   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:34.847319   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-762574 --alsologtostderr -v=3: (10.788449581s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-762574 -n newest-cni-762574
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-762574 -n newest-cni-762574: exit status 7 (68.445955ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-762574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (14.61s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-762574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 18:55:38.028567   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/addons-924081/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:40.795320   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:41.906333   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:55:45.089418   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/bridge-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-762574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.163212186s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-762574 -n newest-cni-762574
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.61s)
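
The second start completes in about 14s, largely because the profile's Docker container and cached images from the first start are reused. The same invocation, reformatted one flag per line (flags copied from the log above; the trailing comments are interpretation, not minikube documentation):

	out/minikube-linux-amd64 start -p newest-cni-762574 \
	  --memory=2200 \
	  --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.31.1
	# --wait blocks until the listed components are healthy; --extra-config forwards
	# pod-network-cidr to kubeadm; --network-plugin=cni defers pod networking to an
	# external CNI, which is why the two subtests below warn that pods cannot schedule.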

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-762574 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-762574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-762574 -n newest-cni-762574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-762574 -n newest-cni-762574: exit status 2 (292.469909ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-762574 -n newest-cni-762574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-762574 -n newest-cni-762574: exit status 2 (306.520892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-762574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-762574 -n newest-cni-762574
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-762574 -n newest-cni-762574
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.75s)
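
The pause check follows a fixed round trip: pause, confirm the apiserver reports Paused and the kubelet Stopped (each status call exiting 2, which the test tolerates as "may be ok"), then unpause and confirm both recover. Condensed, under the assumption that an unpaused profile reports Running:

	out/minikube-linux-amd64 pause -p newest-cni-762574
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-762574   # Paused
	out/minikube-linux-amd64 unpause -p newest-cni-762574
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-762574   # Running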

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5pwlt" [d18af472-6f58-4fb3-aff8-c6ff31c15978] Running
E0915 18:56:32.628862   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/custom-flannel-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003149962s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
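
This check, like the AddonExistsAfterStop check that follows, polls for a pod matching a label selector until it reports healthy. Outside the harness, a rough equivalent (kubectl's built-in wait, not the helper's actual polling mechanism) would be:

	# Wait up to 9m for the dashboard pod to become Ready.
	kubectl --context no-preload-691797 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m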

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5pwlt" [d18af472-6f58-4fb3-aff8-c6ff31c15978] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003636924s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-691797 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-691797 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
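
The image check lists everything loaded into the node and compares it against the expected set for this Kubernetes version; the busybox image is reported but tolerated, as it was loaded by earlier subtests in this group. To eyeball the same data, the JSON output can be flattened (assuming jq is available and that the listing exposes a repoTags field, as minikube's JSON format did around this release):

	out/minikube-linux-amd64 -p no-preload-691797 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort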

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-691797 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-691797 -n no-preload-691797
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-691797 -n no-preload-691797: exit status 2 (296.236594ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-691797 -n no-preload-691797
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-691797 -n no-preload-691797: exit status 2 (290.206119ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-691797 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-691797 -n no-preload-691797
E0915 18:56:42.628239   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/kubenet-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-691797 -n no-preload-691797
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hcq8d" [d0ec989c-63d3-4ce5-aeca-7fbc0eec0538] Running
E0915 18:56:55.428032   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/auto-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004590718s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hcq8d" [d0ec989c-63d3-4ce5-aeca-7fbc0eec0538] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004126374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-870687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2n56v" [62db5c5b-092c-4319-a998-68efe92bfd83] Running
E0915 18:56:57.496675   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.503117   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.514550   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.535943   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.577369   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.658872   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:57.820514   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:58.142447   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:56:58.784575   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:57:00.066269   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004038376s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-870687 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-870687 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-870687 -n embed-certs-870687
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-870687 -n embed-certs-870687: exit status 2 (288.037787ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-870687 -n embed-certs-870687
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-870687 -n embed-certs-870687: exit status 2 (293.863848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-870687 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-870687 -n embed-certs-870687
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-870687 -n embed-certs-870687
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2n56v" [62db5c5b-092c-4319-a998-68efe92bfd83] Running
E0915 18:57:02.628461   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
E0915 18:57:02.716941   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/false-887220/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003807166s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-335928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-335928 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-335928 --alsologtostderr -v=1
E0915 18:57:07.750626   17950 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-11129/.minikube/profiles/old-k8s-version-910865/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928: exit status 2 (289.006974ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928: exit status 2 (282.244246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-335928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335928 -n default-k8s-diff-port-335928
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                    

Test skip (20/343)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
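
This skip, and the matching binaries and v1.31.1 skips below, all follow from the same condition: a preload tarball for the Kubernetes version and container runtime already exists, and it bundles both the images and the binaries, so caching them individually would be redundant. The preload sits in the local minikube cache; a quick way to confirm (the filename is illustrative, preload and Kubernetes versions vary):

	ls ~/.minikube/cache/preloaded-tarball/
	# e.g. preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4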

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (7.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-887220 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-887220" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-887220

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-887220" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-887220"

                                                
                                                
----------------------- debugLogs end: cilium-887220 [took: 7.285794102s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-887220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-887220
--- SKIP: TestNetworkPlugins/group/cilium (7.51s)
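
Every probe in the debugLogs dump above failed with a missing-context or missing-profile error because the test skipped before a cilium cluster was ever created, while the debug-log collector still ran on the skip path. The empty kubectl config (clusters: null, contexts: null) confirms it; any single probe reproduces the pattern:

	kubectl --context cilium-887220 get pods
	# error: context "cilium-887220" does not exist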

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-167864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-167864
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    