Test Report: Docker_Linux 19712

                    
c4dd788a1c1ea09a0f3bb20836a8b75126e684b1:2024-09-27:36398

Failed tests (1/342)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.45s   |
TestAddons/parallel/Registry (72.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.924175ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zwv8v" [a2abbccc-9f95-4a37-8198-40d424cdcb00] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003951505s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cf9vr" [c6fffe66-001f-45ee-9860-645249413bc6] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002955232s
addons_test.go:338: (dbg) Run:  kubectl --context addons-393052 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-393052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-393052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.078371635s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-393052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
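The probe that failed at addons_test.go:343 is a `wget --spider` run (request the resource, discard the body) against the registry Service's cluster-internal DNS name, which timed out after 1m0s. A minimal Python sketch of the same reachability check, for illustration only; the `probe` helper and its one-second timeout are hypothetical, not part of the test suite:

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 1.0) -> bool:
    """Return True if the URL answers an HTTP request within the timeout.

    Returns False on DNS failure, connection refusal, an HTTP error
    status, or a timeout -- the class of failure the test's
    `wget --spider` invocation hit.
    """
    try:
        # HEAD mirrors --spider's behavior: check reachability without
        # downloading the body.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

# The *.svc.cluster.local name only resolves from inside a pod, so this
# is expected to come back False when run outside the cluster:
print(probe("http://registry.kube-system.svc.cluster.local"))
```

Inside the cluster, a healthy registry addon should make this check succeed well before the 6m0s pod-readiness window the test allows.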
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 ip
2024/09/27 17:09:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-393052
helpers_test.go:235: (dbg) docker inspect addons-393052:

-- stdout --
	[
	    {
	        "Id": "d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc",
	        "Created": "2024-09-27T16:56:44.99803186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T16:56:45.136283674Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fba5f082b59effd6acfcb1eed3d3f86a23bd3a65463877f8197a730d49f52a09",
	        "ResolvConfPath": "/var/lib/docker/containers/d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc/hosts",
	        "LogPath": "/var/lib/docker/containers/d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc/d5d2a9be003debcdf45a7fd97909ee6e48ef7d879f79679ba4f994a7485d62cc-json.log",
	        "Name": "/addons-393052",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-393052:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-393052",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/647bd1ff72e872b238c16e5b34133057689ccdc550e6171dbd88848104f1735b-init/diff:/var/lib/docker/overlay2/07aafddbc91948fc974daae0282e6f765d5d278c360494fef86bc918a6f5c340/diff",
	                "MergedDir": "/var/lib/docker/overlay2/647bd1ff72e872b238c16e5b34133057689ccdc550e6171dbd88848104f1735b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/647bd1ff72e872b238c16e5b34133057689ccdc550e6171dbd88848104f1735b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/647bd1ff72e872b238c16e5b34133057689ccdc550e6171dbd88848104f1735b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-393052",
	                "Source": "/var/lib/docker/volumes/addons-393052/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-393052",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-393052",
	                "name.minikube.sigs.k8s.io": "addons-393052",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef0aaecfccaec402fecc5e09209a1e3ffac1a1b59e99154c3f9931f652ef8d36",
	            "SandboxKey": "/var/run/docker/netns/ef0aaecfccae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-393052": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "29b350c6cce9510e7a341cef59e1e5cd262b8e0fe65bf06d16edae0ed7d76cc3",
	                    "EndpointID": "0401b811335a10383eee7c1ee6b2668204b5781c11c6da8b4b0713981f6998b0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-393052",
	                        "d5d2a9be003d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
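When reading a post-mortem `docker inspect` dump like the one above, the host port a container port was published on sits under `NetworkSettings.Ports`. A small sketch of extracting it programmatically; the JSON sample is abridged from the dump above, and `host_port` is a hypothetical helper, not part of the test harness:

```python
import json

# Abridged from the `docker inspect addons-393052` output above.
inspect_output = """
[
  {
    "Name": "/addons-393052",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
"""

def host_port(inspect_json: str, container_port: str) -> str:
    """Return the first published host port for e.g. '5000/tcp'.

    `docker inspect` emits a JSON array (one object per container), so
    index [0] selects the single inspected container.
    """
    containers = json.loads(inspect_json)
    bindings = containers[0]["NetworkSettings"]["Ports"][container_port]
    return bindings[0]["HostPort"]

# The registry port from the dump: published on 127.0.0.1:32770.
print(host_port(inspect_output, "5000/tcp"))
```

This is the mapping behind the test's fallback check: after the in-cluster probe failed, it fetched the node IP and issued `GET http://192.168.49.2:5000` directly against the registry's NodePort.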
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-393052 -n addons-393052
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-229474 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | download-docker-229474                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-229474                                                                   | download-docker-229474 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-269269   | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | binary-mirror-269269                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36853                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-269269                                                                     | binary-mirror-269269   | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | addons-393052                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | addons-393052                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-393052 --wait=true                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:00 UTC | 27 Sep 24 17:00 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:08 UTC |
	|         | -p addons-393052                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-393052 addons                                                                        | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:08 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:08 UTC |
	|         | -p addons-393052                                                                            |                        |         |         |                     |                     |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:08 UTC | 27 Sep 24 17:09 UTC |
	|         | addons-393052                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-393052 ssh curl -s                                                                   | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-393052 ip                                                                            | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-393052 ssh cat                                                                       | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | /opt/local-path-provisioner/pvc-cc992d79-9229-48dc-815e-b7a98bf6633a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-393052 addons                                                                        | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-393052 addons                                                                        | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | addons-393052                                                                               |                        |         |         |                     |                     |
	| ip      | addons-393052 ip                                                                            | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	| addons  | addons-393052 addons disable                                                                | addons-393052          | jenkins | v1.34.0 | 27 Sep 24 17:09 UTC | 27 Sep 24 17:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 16:56:23
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 16:56:23.041893   19184 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:56:23.042012   19184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:23.042020   19184 out.go:358] Setting ErrFile to fd 2...
	I0927 16:56:23.042024   19184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:23.042197   19184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 16:56:23.042804   19184 out.go:352] Setting JSON to false
	I0927 16:56:23.043601   19184 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2330,"bootTime":1727453853,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:56:23.043686   19184 start.go:139] virtualization: kvm guest
	I0927 16:56:23.046247   19184 out.go:177] * [addons-393052] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 16:56:23.047874   19184 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 16:56:23.047862   19184 notify.go:220] Checking for updates...
	I0927 16:56:23.049415   19184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:56:23.050953   19184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 16:56:23.052242   19184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	I0927 16:56:23.053400   19184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 16:56:23.054505   19184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 16:56:23.055901   19184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:56:23.077608   19184 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 16:56:23.077748   19184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:56:23.120754   19184 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 16:56:23.111265961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:56:23.120848   19184 docker.go:318] overlay module found
	I0927 16:56:23.123846   19184 out.go:177] * Using the docker driver based on user configuration
	I0927 16:56:23.125210   19184 start.go:297] selected driver: docker
	I0927 16:56:23.125222   19184 start.go:901] validating driver "docker" against <nil>
	I0927 16:56:23.125232   19184 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 16:56:23.125951   19184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:56:23.168344   19184 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 16:56:23.160302202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:56:23.168510   19184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:56:23.168741   19184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 16:56:23.170744   19184 out.go:177] * Using Docker driver with root privileges
	I0927 16:56:23.172022   19184 cni.go:84] Creating CNI manager for ""
	I0927 16:56:23.172078   19184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 16:56:23.172090   19184 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 16:56:23.172147   19184 start.go:340] cluster config:
	{Name:addons-393052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-393052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:56:23.173423   19184 out.go:177] * Starting "addons-393052" primary control-plane node in "addons-393052" cluster
	I0927 16:56:23.174643   19184 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 16:56:23.176098   19184 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 16:56:23.177422   19184 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:23.177454   19184 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0927 16:56:23.177464   19184 cache.go:56] Caching tarball of preloaded images
	I0927 16:56:23.177479   19184 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 16:56:23.177543   19184 preload.go:172] Found /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0927 16:56:23.177557   19184 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 16:56:23.177893   19184 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/config.json ...
	I0927 16:56:23.177916   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/config.json: {Name:mkbd0ad00dfb1ae3fc1d82796007a185fb4418ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:23.193357   19184 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 16:56:23.193481   19184 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 16:56:23.193501   19184 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 16:56:23.193520   19184 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 16:56:23.193530   19184 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 16:56:23.193534   19184 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 16:56:35.524470   19184 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 16:56:35.524503   19184 cache.go:194] Successfully downloaded all kic artifacts
	I0927 16:56:35.524544   19184 start.go:360] acquireMachinesLock for addons-393052: {Name:mk76b81b630d238d72bd0c6329943de60470620a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:56:35.524647   19184 start.go:364] duration metric: took 80.309µs to acquireMachinesLock for "addons-393052"
	I0927 16:56:35.524674   19184 start.go:93] Provisioning new machine with config: &{Name:addons-393052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-393052 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 16:56:35.524769   19184 start.go:125] createHost starting for "" (driver="docker")
	I0927 16:56:35.526715   19184 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 16:56:35.526949   19184 start.go:159] libmachine.API.Create for "addons-393052" (driver="docker")
	I0927 16:56:35.526985   19184 client.go:168] LocalClient.Create starting
	I0927 16:56:35.527074   19184 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem
	I0927 16:56:35.789290   19184 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/cert.pem
	I0927 16:56:35.962922   19184 cli_runner.go:164] Run: docker network inspect addons-393052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 16:56:35.978945   19184 cli_runner.go:211] docker network inspect addons-393052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 16:56:35.979027   19184 network_create.go:284] running [docker network inspect addons-393052] to gather additional debugging logs...
	I0927 16:56:35.979046   19184 cli_runner.go:164] Run: docker network inspect addons-393052
	W0927 16:56:35.994441   19184 cli_runner.go:211] docker network inspect addons-393052 returned with exit code 1
	I0927 16:56:35.994470   19184 network_create.go:287] error running [docker network inspect addons-393052]: docker network inspect addons-393052: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-393052 not found
	I0927 16:56:35.994481   19184 network_create.go:289] output of [docker network inspect addons-393052]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-393052 not found
	
	** /stderr **
	I0927 16:56:35.994551   19184 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 16:56:36.011145   19184 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b0a780}
	I0927 16:56:36.011190   19184 network_create.go:124] attempt to create docker network addons-393052 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 16:56:36.011232   19184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-393052 addons-393052
	I0927 16:56:36.071647   19184 network_create.go:108] docker network addons-393052 192.168.49.0/24 created
	I0927 16:56:36.071672   19184 kic.go:121] calculated static IP "192.168.49.2" for the "addons-393052" container
	I0927 16:56:36.071740   19184 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 16:56:36.086961   19184 cli_runner.go:164] Run: docker volume create addons-393052 --label name.minikube.sigs.k8s.io=addons-393052 --label created_by.minikube.sigs.k8s.io=true
	I0927 16:56:36.104067   19184 oci.go:103] Successfully created a docker volume addons-393052
	I0927 16:56:36.104138   19184 cli_runner.go:164] Run: docker run --rm --name addons-393052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-393052 --entrypoint /usr/bin/test -v addons-393052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 16:56:41.040677   19184 cli_runner.go:217] Completed: docker run --rm --name addons-393052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-393052 --entrypoint /usr/bin/test -v addons-393052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (4.936497694s)
	I0927 16:56:41.040703   19184 oci.go:107] Successfully prepared a docker volume addons-393052
	I0927 16:56:41.040728   19184 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:41.040746   19184 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 16:56:41.040811   19184 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-393052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 16:56:44.939125   19184 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-393052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898270232s)
	I0927 16:56:44.939164   19184 kic.go:203] duration metric: took 3.898415134s to extract preloaded images to volume ...
	W0927 16:56:44.939291   19184 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 16:56:44.939451   19184 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 16:56:44.983685   19184 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-393052 --name addons-393052 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-393052 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-393052 --network addons-393052 --ip 192.168.49.2 --volume addons-393052:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 16:56:45.309682   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Running}}
	I0927 16:56:45.327236   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:56:45.345304   19184 cli_runner.go:164] Run: docker exec addons-393052 stat /var/lib/dpkg/alternatives/iptables
	I0927 16:56:45.388869   19184 oci.go:144] the created container "addons-393052" has a running status.
	I0927 16:56:45.388904   19184 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa...
	I0927 16:56:45.458588   19184 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 16:56:45.478126   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:56:45.494695   19184 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 16:56:45.494720   19184 kic_runner.go:114] Args: [docker exec --privileged addons-393052 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 16:56:45.535613   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:56:45.556753   19184 machine.go:93] provisionDockerMachine start ...
	I0927 16:56:45.556831   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:45.574233   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:45.574444   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:45.574463   19184 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 16:56:45.575178   19184 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43602->127.0.0.1:32768: read: connection reset by peer
	I0927 16:56:48.687163   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-393052
	
	I0927 16:56:48.687190   19184 ubuntu.go:169] provisioning hostname "addons-393052"
	I0927 16:56:48.687255   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:48.703939   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:48.704122   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:48.704136   19184 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-393052 && echo "addons-393052" | sudo tee /etc/hostname
	I0927 16:56:48.827036   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-393052
	
	I0927 16:56:48.827106   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:48.845174   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:48.845351   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:48.845373   19184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-393052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-393052/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-393052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 16:56:48.955712   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 16:56:48.955739   19184 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11000/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11000/.minikube}
	I0927 16:56:48.955777   19184 ubuntu.go:177] setting up certificates
	I0927 16:56:48.955789   19184 provision.go:84] configureAuth start
	I0927 16:56:48.955860   19184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-393052
	I0927 16:56:48.972915   19184 provision.go:143] copyHostCerts
	I0927 16:56:48.972982   19184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11000/.minikube/ca.pem (1082 bytes)
	I0927 16:56:48.973086   19184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11000/.minikube/cert.pem (1123 bytes)
	I0927 16:56:48.973146   19184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11000/.minikube/key.pem (1675 bytes)
	I0927 16:56:48.973192   19184 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca-key.pem org=jenkins.addons-393052 san=[127.0.0.1 192.168.49.2 addons-393052 localhost minikube]
	I0927 16:56:49.122491   19184 provision.go:177] copyRemoteCerts
	I0927 16:56:49.122544   19184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 16:56:49.122576   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:49.139173   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:56:49.224133   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 16:56:49.245283   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 16:56:49.266888   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 16:56:49.288183   19184 provision.go:87] duration metric: took 332.383022ms to configureAuth
	I0927 16:56:49.288206   19184 ubuntu.go:193] setting minikube options for container-runtime
	I0927 16:56:49.288378   19184 config.go:182] Loaded profile config "addons-393052": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 16:56:49.288425   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:49.306475   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:49.306645   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:49.306658   19184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 16:56:49.419927   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0927 16:56:49.419947   19184 ubuntu.go:71] root file system type: overlay
	I0927 16:56:49.420083   19184 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 16:56:49.420145   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:49.436581   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:49.436755   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:49.436811   19184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 16:56:49.558038   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 16:56:49.558103   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:49.575079   19184 main.go:141] libmachine: Using SSH client type: native
	I0927 16:56:49.575259   19184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0927 16:56:49.575283   19184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 16:56:50.242733   19184 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:29.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-27 16:56:49.552734551 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0927 16:56:50.242781   19184 machine.go:96] duration metric: took 4.686005988s to provisionDockerMachine
	I0927 16:56:50.242794   19184 client.go:171] duration metric: took 14.715798889s to LocalClient.Create
	I0927 16:56:50.242816   19184 start.go:167] duration metric: took 14.715867705s to libmachine.API.Create "addons-393052"
	I0927 16:56:50.242825   19184 start.go:293] postStartSetup for "addons-393052" (driver="docker")
	I0927 16:56:50.242839   19184 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 16:56:50.242905   19184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 16:56:50.242950   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:50.261237   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:56:50.348679   19184 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 16:56:50.351807   19184 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 16:56:50.351864   19184 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 16:56:50.351878   19184 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 16:56:50.351885   19184 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 16:56:50.351898   19184 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11000/.minikube/addons for local assets ...
	I0927 16:56:50.351954   19184 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11000/.minikube/files for local assets ...
	I0927 16:56:50.351976   19184 start.go:296] duration metric: took 109.144075ms for postStartSetup
	I0927 16:56:50.352240   19184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-393052
	I0927 16:56:50.369532   19184 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/config.json ...
	I0927 16:56:50.369784   19184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 16:56:50.369820   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:50.387042   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:56:50.468416   19184 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 16:56:50.472285   19184 start.go:128] duration metric: took 14.947499497s to createHost
	I0927 16:56:50.472310   19184 start.go:83] releasing machines lock for "addons-393052", held for 14.947650151s
	I0927 16:56:50.472358   19184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-393052
	I0927 16:56:50.488074   19184 ssh_runner.go:195] Run: cat /version.json
	I0927 16:56:50.488142   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:50.488172   19184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 16:56:50.488231   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:56:50.505699   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:56:50.506192   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:56:50.653614   19184 ssh_runner.go:195] Run: systemctl --version
	I0927 16:56:50.657569   19184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 16:56:50.661330   19184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 16:56:50.682205   19184 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0927 16:56:50.682264   19184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 16:56:50.706829   19184 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 16:56:50.706852   19184 start.go:495] detecting cgroup driver to use...
	I0927 16:56:50.706878   19184 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 16:56:50.707016   19184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 16:56:50.721474   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 16:56:50.729866   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 16:56:50.738081   19184 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 16:56:50.738129   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 16:56:50.746502   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 16:56:50.754635   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 16:56:50.762755   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 16:56:50.771033   19184 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 16:56:50.778573   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 16:56:50.787005   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 16:56:50.795442   19184 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 16:56:50.803809   19184 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 16:56:50.811477   19184 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 16:56:50.811536   19184 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 16:56:50.823746   19184 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 16:56:50.831598   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:56:50.906850   19184 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 16:56:50.994507   19184 start.go:495] detecting cgroup driver to use...
	I0927 16:56:50.994624   19184 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 16:56:50.994702   19184 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 16:56:51.005818   19184 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0927 16:56:51.005871   19184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 16:56:51.016524   19184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 16:56:51.031985   19184 ssh_runner.go:195] Run: which cri-dockerd
	I0927 16:56:51.035419   19184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 16:56:51.044130   19184 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 16:56:51.061379   19184 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 16:56:51.161104   19184 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 16:56:51.251878   19184 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 16:56:51.252039   19184 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 16:56:51.268670   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:56:51.347572   19184 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 16:56:51.604760   19184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 16:56:51.615620   19184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 16:56:51.626003   19184 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 16:56:51.694128   19184 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 16:56:51.758882   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:56:51.835240   19184 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 16:56:51.848127   19184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 16:56:51.858380   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:56:51.930642   19184 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 16:56:51.988443   19184 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 16:56:51.988536   19184 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 16:56:51.991887   19184 start.go:563] Will wait 60s for crictl version
	I0927 16:56:51.991941   19184 ssh_runner.go:195] Run: which crictl
	I0927 16:56:51.994920   19184 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 16:56:52.026459   19184 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 16:56:52.026526   19184 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 16:56:52.050506   19184 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 16:56:52.076494   19184 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 16:56:52.076580   19184 cli_runner.go:164] Run: docker network inspect addons-393052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 16:56:52.092356   19184 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 16:56:52.095636   19184 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 16:56:52.105384   19184 kubeadm.go:883] updating cluster {Name:addons-393052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-393052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 16:56:52.105516   19184 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:52.105564   19184 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 16:56:52.123112   19184 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 16:56:52.123136   19184 docker.go:615] Images already preloaded, skipping extraction
	I0927 16:56:52.123191   19184 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 16:56:52.141495   19184 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 16:56:52.141517   19184 cache_images.go:84] Images are preloaded, skipping loading
	I0927 16:56:52.141526   19184 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0927 16:56:52.141616   19184 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-393052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-393052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 16:56:52.141675   19184 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 16:56:52.183092   19184 cni.go:84] Creating CNI manager for ""
	I0927 16:56:52.183185   19184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 16:56:52.183213   19184 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 16:56:52.183236   19184 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-393052 NodeName:addons-393052 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 16:56:52.183394   19184 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-393052"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 16:56:52.183447   19184 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 16:56:52.191367   19184 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 16:56:52.191430   19184 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 16:56:52.198956   19184 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 16:56:52.214729   19184 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 16:56:52.230357   19184 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0927 16:56:52.246408   19184 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 16:56:52.249491   19184 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 16:56:52.259165   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:56:52.331983   19184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 16:56:52.344550   19184 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052 for IP: 192.168.49.2
	I0927 16:56:52.344576   19184 certs.go:194] generating shared ca certs ...
	I0927 16:56:52.344598   19184 certs.go:226] acquiring lock for ca certs: {Name:mkd25fe85444030cbf67ced728971fd02eb485dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.344733   19184 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11000/.minikube/ca.key
	I0927 16:56:52.571141   19184 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt ...
	I0927 16:56:52.571175   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt: {Name:mkcb22492198b5cad830e9ee5611a2f70b6b93d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.571354   19184 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11000/.minikube/ca.key ...
	I0927 16:56:52.571368   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/ca.key: {Name:mk1265316919f7c1fb39689d179eb43baad065f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.571448   19184 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.key
	I0927 16:56:52.847405   19184 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.crt ...
	I0927 16:56:52.847434   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.crt: {Name:mk1e2d1750f18ff3124df8961c3fb9ad29110ae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.847595   19184 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.key ...
	I0927 16:56:52.847606   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.key: {Name:mkf96dc17706b23890963f994375368646d2a5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.847671   19184 certs.go:256] generating profile certs ...
	I0927 16:56:52.847720   19184 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.key
	I0927 16:56:52.847741   19184 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt with IP's: []
	I0927 16:56:52.906945   19184 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt ...
	I0927 16:56:52.906975   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: {Name:mk741acd7673595c354ab90664e34dbc738b6931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.907128   19184 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.key ...
	I0927 16:56:52.907138   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.key: {Name:mk90ed6a0348fe77105422fd687ce12a79cb36a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:52.907215   19184 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key.3ccfd323
	I0927 16:56:52.907242   19184 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt.3ccfd323 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 16:56:53.112191   19184 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt.3ccfd323 ...
	I0927 16:56:53.112220   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt.3ccfd323: {Name:mk282fcefbb8a343ff90431b321a2d87b93f0098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:53.112385   19184 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key.3ccfd323 ...
	I0927 16:56:53.112397   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key.3ccfd323: {Name:mk704f833ac9cc29768857026bc41b3ac258099d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:53.112471   19184 certs.go:381] copying /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt.3ccfd323 -> /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt
	I0927 16:56:53.112558   19184 certs.go:385] copying /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key.3ccfd323 -> /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key
	I0927 16:56:53.112607   19184 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.key
	I0927 16:56:53.112625   19184 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.crt with IP's: []
	I0927 16:56:53.363664   19184 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.crt ...
	I0927 16:56:53.363702   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.crt: {Name:mk280f3360e5a1c3e4e5287e9768126f6aeca5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:53.363875   19184 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.key ...
	I0927 16:56:53.363887   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.key: {Name:mka8f1afe152ef96a017e493b2215a8ccfa6efa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:53.364048   19184 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 16:56:53.364082   19184 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/ca.pem (1082 bytes)
	I0927 16:56:53.364105   19184 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/cert.pem (1123 bytes)
	I0927 16:56:53.364124   19184 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11000/.minikube/certs/key.pem (1675 bytes)
	I0927 16:56:53.364729   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 16:56:53.387304   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 16:56:53.408714   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 16:56:53.430008   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 16:56:53.450894   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 16:56:53.472116   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 16:56:53.492993   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 16:56:53.513688   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 16:56:53.535264   19184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 16:56:53.557241   19184 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 16:56:53.573450   19184 ssh_runner.go:195] Run: openssl version
	I0927 16:56:53.578919   19184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 16:56:53.587651   19184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:56:53.590962   19184 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:56:53.591020   19184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:56:53.597268   19184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 16:56:53.605768   19184 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 16:56:53.608957   19184 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 16:56:53.609001   19184 kubeadm.go:392] StartCluster: {Name:addons-393052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-393052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:56:53.609105   19184 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 16:56:53.625602   19184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 16:56:53.633598   19184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 16:56:53.641160   19184 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 16:56:53.641206   19184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 16:56:53.648777   19184 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 16:56:53.648794   19184 kubeadm.go:157] found existing configuration files:
	
	I0927 16:56:53.648832   19184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 16:56:53.656376   19184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 16:56:53.656434   19184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 16:56:53.663638   19184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 16:56:53.670775   19184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 16:56:53.670831   19184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 16:56:53.678054   19184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 16:56:53.685553   19184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 16:56:53.685597   19184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 16:56:53.692954   19184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 16:56:53.700561   19184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 16:56:53.700612   19184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 16:56:53.708378   19184 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 16:56:53.745244   19184 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 16:56:53.745315   19184 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 16:56:53.764370   19184 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 16:56:53.764448   19184 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0927 16:56:53.764516   19184 kubeadm.go:310] OS: Linux
	I0927 16:56:53.764594   19184 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 16:56:53.764666   19184 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 16:56:53.764755   19184 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 16:56:53.764841   19184 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 16:56:53.764912   19184 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 16:56:53.764985   19184 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 16:56:53.765029   19184 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 16:56:53.765072   19184 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 16:56:53.765122   19184 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 16:56:53.814416   19184 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 16:56:53.814549   19184 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 16:56:53.814650   19184 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 16:56:53.824272   19184 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 16:56:53.827744   19184 out.go:235]   - Generating certificates and keys ...
	I0927 16:56:53.827882   19184 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 16:56:53.827975   19184 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 16:56:53.876477   19184 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 16:56:53.986886   19184 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 16:56:54.068207   19184 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 16:56:54.167150   19184 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 16:56:54.372866   19184 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 16:56:54.373139   19184 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-393052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 16:56:54.524859   19184 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 16:56:54.524987   19184 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-393052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 16:56:54.641105   19184 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 16:56:54.769201   19184 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 16:56:54.826534   19184 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 16:56:54.826614   19184 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 16:56:55.129792   19184 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 16:56:55.324701   19184 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 16:56:55.644773   19184 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 16:56:55.819100   19184 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 16:56:55.966705   19184 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 16:56:55.967184   19184 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 16:56:55.970751   19184 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 16:56:55.973159   19184 out.go:235]   - Booting up control plane ...
	I0927 16:56:55.973262   19184 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 16:56:55.973347   19184 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 16:56:55.973434   19184 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 16:56:55.981959   19184 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 16:56:55.986928   19184 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 16:56:55.986998   19184 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 16:56:56.068998   19184 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 16:56:56.069106   19184 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 16:56:57.070284   19184 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001306231s
	I0927 16:56:57.070412   19184 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 16:57:01.572390   19184 kubeadm.go:310] [api-check] The API server is healthy after 4.5021015s
	I0927 16:57:01.583302   19184 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 16:57:01.595007   19184 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 16:57:01.612241   19184 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 16:57:01.612467   19184 kubeadm.go:310] [mark-control-plane] Marking the node addons-393052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 16:57:01.619602   19184 kubeadm.go:310] [bootstrap-token] Using token: d7xb9t.h50rej78m8xehro2
	I0927 16:57:01.620983   19184 out.go:235]   - Configuring RBAC rules ...
	I0927 16:57:01.621121   19184 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 16:57:01.624327   19184 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 16:57:01.629349   19184 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 16:57:01.631739   19184 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 16:57:01.634030   19184 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 16:57:01.637229   19184 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 16:57:01.979248   19184 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 16:57:02.395946   19184 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 16:57:02.979106   19184 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 16:57:02.980324   19184 kubeadm.go:310] 
	I0927 16:57:02.980434   19184 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 16:57:02.980452   19184 kubeadm.go:310] 
	I0927 16:57:02.980608   19184 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 16:57:02.980629   19184 kubeadm.go:310] 
	I0927 16:57:02.980652   19184 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 16:57:02.980731   19184 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 16:57:02.980798   19184 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 16:57:02.980811   19184 kubeadm.go:310] 
	I0927 16:57:02.980883   19184 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 16:57:02.980896   19184 kubeadm.go:310] 
	I0927 16:57:02.980961   19184 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 16:57:02.980974   19184 kubeadm.go:310] 
	I0927 16:57:02.981058   19184 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 16:57:02.981157   19184 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 16:57:02.981255   19184 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 16:57:02.981265   19184 kubeadm.go:310] 
	I0927 16:57:02.981369   19184 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 16:57:02.981473   19184 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 16:57:02.981487   19184 kubeadm.go:310] 
	I0927 16:57:02.981605   19184 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d7xb9t.h50rej78m8xehro2 \
	I0927 16:57:02.981744   19184 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9a8e80ffcda2bf426564b2745740303b3b235efa524a72a22fa76d38bf67b20 \
	I0927 16:57:02.981780   19184 kubeadm.go:310] 	--control-plane 
	I0927 16:57:02.981786   19184 kubeadm.go:310] 
	I0927 16:57:02.981894   19184 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 16:57:02.981905   19184 kubeadm.go:310] 
	I0927 16:57:02.982017   19184 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d7xb9t.h50rej78m8xehro2 \
	I0927 16:57:02.982135   19184 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9a8e80ffcda2bf426564b2745740303b3b235efa524a72a22fa76d38bf67b20 
	I0927 16:57:02.985632   19184 kubeadm.go:310] W0927 16:56:53.742656    1930 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 16:57:02.986034   19184 kubeadm.go:310] W0927 16:56:53.743285    1930 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 16:57:02.986309   19184 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0927 16:57:02.986411   19184 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 16:57:02.986459   19184 cni.go:84] Creating CNI manager for ""
	I0927 16:57:02.986481   19184 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 16:57:02.988523   19184 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 16:57:02.989936   19184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 16:57:02.998343   19184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 16:57:03.015727   19184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 16:57:03.015868   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-393052 minikube.k8s.io/updated_at=2024_09_27T16_57_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=addons-393052 minikube.k8s.io/primary=true
	I0927 16:57:03.015877   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:03.082378   19184 ops.go:34] apiserver oom_adj: -16
	I0927 16:57:03.082455   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:03.582722   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:04.082711   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:04.582742   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:05.083508   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:05.582517   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:06.082546   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:06.582601   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:07.082927   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:07.583200   19184 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:07.668080   19184 kubeadm.go:1113] duration metric: took 4.652271388s to wait for elevateKubeSystemPrivileges
	I0927 16:57:07.668110   19184 kubeadm.go:394] duration metric: took 14.059110934s to StartCluster
	I0927 16:57:07.668131   19184 settings.go:142] acquiring lock: {Name:mkc0314a2dd35aa6c5a0e7084cda267c179ed285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:07.668252   19184 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 16:57:07.668695   19184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/kubeconfig: {Name:mkfa8457d11960c7d13127b721ec2fe44d5a2e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:07.668926   19184 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 16:57:07.668952   19184 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 16:57:07.669007   19184 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 16:57:07.669119   19184 addons.go:69] Setting yakd=true in profile "addons-393052"
	I0927 16:57:07.669142   19184 addons.go:234] Setting addon yakd=true in "addons-393052"
	I0927 16:57:07.669137   19184 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-393052"
	I0927 16:57:07.669144   19184 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-393052"
	I0927 16:57:07.669161   19184 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-393052"
	I0927 16:57:07.669171   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.669184   19184 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-393052"
	I0927 16:57:07.669195   19184 config.go:182] Loaded profile config "addons-393052": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 16:57:07.669205   19184 addons.go:69] Setting gcp-auth=true in profile "addons-393052"
	I0927 16:57:07.669217   19184 mustload.go:65] Loading cluster: addons-393052
	I0927 16:57:07.669198   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.669340   19184 config.go:182] Loaded profile config "addons-393052": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 16:57:07.669507   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.669516   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.669637   19184 addons.go:69] Setting default-storageclass=true in profile "addons-393052"
	I0927 16:57:07.669655   19184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-393052"
	I0927 16:57:07.669682   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.669696   19184 addons.go:69] Setting volcano=true in profile "addons-393052"
	I0927 16:57:07.669715   19184 addons.go:234] Setting addon volcano=true in "addons-393052"
	I0927 16:57:07.669738   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.669861   19184 addons.go:69] Setting ingress-dns=true in profile "addons-393052"
	I0927 16:57:07.669882   19184 addons.go:234] Setting addon ingress-dns=true in "addons-393052"
	I0927 16:57:07.669906   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.669918   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.669935   19184 addons.go:69] Setting ingress=true in profile "addons-393052"
	I0927 16:57:07.669956   19184 addons.go:234] Setting addon ingress=true in "addons-393052"
	I0927 16:57:07.669988   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.670142   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.670221   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.670390   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.670491   19184 addons.go:69] Setting volumesnapshots=true in profile "addons-393052"
	I0927 16:57:07.670523   19184 addons.go:234] Setting addon volumesnapshots=true in "addons-393052"
	I0927 16:57:07.670559   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.670589   19184 addons.go:69] Setting inspektor-gadget=true in profile "addons-393052"
	I0927 16:57:07.669686   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.670813   19184 addons.go:69] Setting metrics-server=true in profile "addons-393052"
	I0927 16:57:07.670857   19184 addons.go:234] Setting addon metrics-server=true in "addons-393052"
	I0927 16:57:07.670894   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.671025   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.671430   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.671493   19184 addons.go:69] Setting registry=true in profile "addons-393052"
	I0927 16:57:07.671517   19184 addons.go:234] Setting addon registry=true in "addons-393052"
	I0927 16:57:07.671553   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.671869   19184 addons.go:69] Setting storage-provisioner=true in profile "addons-393052"
	I0927 16:57:07.671895   19184 addons.go:234] Setting addon storage-provisioner=true in "addons-393052"
	I0927 16:57:07.671920   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.672214   19184 addons.go:69] Setting cloud-spanner=true in profile "addons-393052"
	I0927 16:57:07.672238   19184 addons.go:234] Setting addon cloud-spanner=true in "addons-393052"
	I0927 16:57:07.672270   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.672717   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.673139   19184 out.go:177] * Verifying Kubernetes components...
	I0927 16:57:07.673347   19184 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-393052"
	I0927 16:57:07.673407   19184 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-393052"
	I0927 16:57:07.673444   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.674105   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.671433   19184 addons.go:234] Setting addon inspektor-gadget=true in "addons-393052"
	I0927 16:57:07.676561   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.677186   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.679923   19184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:57:07.684522   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.685212   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.701732   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.711070   19184 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-393052"
	I0927 16:57:07.711105   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.711388   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.714433   19184 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 16:57:07.714975   19184 addons.go:234] Setting addon default-storageclass=true in "addons-393052"
	I0927 16:57:07.715016   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:07.715647   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:07.715892   19184 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 16:57:07.721319   19184 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 16:57:07.721344   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 16:57:07.721395   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.721576   19184 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 16:57:07.722041   19184 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 16:57:07.722061   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 16:57:07.722107   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.722718   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 16:57:07.722833   19184 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 16:57:07.722848   19184 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 16:57:07.722905   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.725436   19184 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 16:57:07.726558   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 16:57:07.727007   19184 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 16:57:07.727022   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 16:57:07.727068   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.729229   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 16:57:07.730441   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 16:57:07.732055   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 16:57:07.732066   19184 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 16:57:07.734330   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 16:57:07.734412   19184 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 16:57:07.736287   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 16:57:07.736390   19184 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 16:57:07.738228   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 16:57:07.739746   19184 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 16:57:07.739772   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 16:57:07.739843   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.743883   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 16:57:07.743921   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 16:57:07.743982   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.766135   19184 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 16:57:07.767544   19184 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:07.769261   19184 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:07.770966   19184 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 16:57:07.770988   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 16:57:07.771044   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.772014   19184 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 16:57:07.779663   19184 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 16:57:07.779806   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 16:57:07.779821   19184 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 16:57:07.779901   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.783966   19184 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 16:57:07.783995   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 16:57:07.784052   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.784466   19184 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 16:57:07.784537   19184 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 16:57:07.786830   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.788524   19184 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 16:57:07.788545   19184 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 16:57:07.788604   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.788757   19184 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 16:57:07.789721   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.791520   19184 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 16:57:07.791530   19184 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 16:57:07.791562   19184 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 16:57:07.791623   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.791800   19184 out.go:177]   - Using image docker.io/busybox:stable
	I0927 16:57:07.793479   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.793544   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.795090   19184 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 16:57:07.795108   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 16:57:07.795160   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.798816   19184 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0927 16:57:07.800018   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.800319   19184 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 16:57:07.800334   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 16:57:07.800387   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.801573   19184 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 16:57:07.801589   19184 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 16:57:07.801636   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:07.819192   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.821702   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.830698   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.831016   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.831282   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.831799   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.841146   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.842916   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:07.849158   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	W0927 16:57:07.850394   19184 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 16:57:07.850420   19184 retry.go:31] will retry after 187.855138ms: ssh: handshake failed: EOF
	I0927 16:57:08.125846   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 16:57:08.134373   19184 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 16:57:08.134509   19184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 16:57:08.237514   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 16:57:08.244517   19184 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 16:57:08.244548   19184 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 16:57:08.324928   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 16:57:08.333271   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 16:57:08.334848   19184 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 16:57:08.334879   19184 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 16:57:08.344492   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 16:57:08.344524   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 16:57:08.346170   19184 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 16:57:08.346208   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 16:57:08.443145   19184 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 16:57:08.443236   19184 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 16:57:08.526944   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 16:57:08.536455   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 16:57:08.626388   19184 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 16:57:08.626497   19184 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 16:57:08.629808   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 16:57:08.630205   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 16:57:08.636819   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 16:57:08.636903   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 16:57:08.739734   19184 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 16:57:08.739817   19184 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 16:57:08.749225   19184 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 16:57:08.749307   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 16:57:08.831910   19184 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 16:57:08.832013   19184 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 16:57:08.835372   19184 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 16:57:08.835459   19184 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 16:57:08.926036   19184 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 16:57:08.926121   19184 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 16:57:09.027420   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 16:57:09.027455   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 16:57:09.142204   19184 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 16:57:09.142235   19184 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 16:57:09.142669   19184 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 16:57:09.142741   19184 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 16:57:09.237200   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 16:57:09.245091   19184 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 16:57:09.245172   19184 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 16:57:09.433565   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 16:57:09.433723   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 16:57:09.526484   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 16:57:09.532002   19184 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 16:57:09.532084   19184 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 16:57:09.633673   19184 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 16:57:09.633756   19184 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 16:57:09.634596   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 16:57:09.634654   19184 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 16:57:09.640813   19184 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 16:57:09.640836   19184 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 16:57:10.040258   19184 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:10.040289   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 16:57:10.134364   19184 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 16:57:10.134460   19184 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 16:57:10.330073   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.204179863s)
	I0927 16:57:10.330239   19184 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.195708368s)
	I0927 16:57:10.331010   19184 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.196597864s)
	I0927 16:57:10.331166   19184 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 16:57:10.341490   19184 node_ready.go:35] waiting up to 6m0s for node "addons-393052" to be "Ready" ...
	I0927 16:57:10.346147   19184 node_ready.go:49] node "addons-393052" has status "Ready":"True"
	I0927 16:57:10.346232   19184 node_ready.go:38] duration metric: took 4.65899ms for node "addons-393052" to be "Ready" ...
	I0927 16:57:10.346259   19184 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 16:57:10.426183   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 16:57:10.426277   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 16:57:10.438447   19184 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2pllc" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:10.531692   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:10.533373   19184 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 16:57:10.533440   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 16:57:10.639151   19184 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0927 16:57:10.639229   19184 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0927 16:57:10.845281   19184 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-393052" context rescaled to 1 replicas
	I0927 16:57:11.128376   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 16:57:11.128402   19184 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 16:57:11.136837   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 16:57:11.337326   19184 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 16:57:11.337358   19184 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 16:57:11.544503   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 16:57:11.544529   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 16:57:11.849805   19184 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 16:57:11.849871   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0927 16:57:11.930260   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 16:57:11.930285   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 16:57:12.340820   19184 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 16:57:12.340851   19184 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 16:57:12.440122   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 16:57:12.444407   19184 pod_ready.go:103] pod "coredns-7c65d6cfc9-2pllc" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:12.731850   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 16:57:13.445424   19184 pod_ready.go:93] pod "coredns-7c65d6cfc9-2pllc" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:13.445453   19184 pod_ready.go:82] duration metric: took 3.006909859s for pod "coredns-7c65d6cfc9-2pllc" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:13.445468   19184 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-smbjf" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:13.533127   19184 pod_ready.go:93] pod "coredns-7c65d6cfc9-smbjf" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:13.533214   19184 pod_ready.go:82] duration metric: took 87.717526ms for pod "coredns-7c65d6cfc9-smbjf" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:13.533241   19184 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:13.545924   19184 pod_ready.go:93] pod "etcd-addons-393052" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:13.546003   19184 pod_ready.go:82] duration metric: took 12.743144ms for pod "etcd-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:13.546027   19184 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:14.733183   19184 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 16:57:14.733259   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:14.759551   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:15.526596   19184 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 16:57:15.627547   19184 pod_ready.go:103] pod "kube-apiserver-addons-393052" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:15.834148   19184 addons.go:234] Setting addon gcp-auth=true in "addons-393052"
	I0927 16:57:15.834272   19184 host.go:66] Checking if "addons-393052" exists ...
	I0927 16:57:15.834995   19184 cli_runner.go:164] Run: docker container inspect addons-393052 --format={{.State.Status}}
	I0927 16:57:15.857978   19184 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 16:57:15.858025   19184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-393052
	I0927 16:57:15.875220   19184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/addons-393052/id_rsa Username:docker}
	I0927 16:57:16.631062   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.393504607s)
	I0927 16:57:16.631300   19184 addons.go:475] Verifying addon ingress=true in "addons-393052"
	I0927 16:57:16.631259   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.306234232s)
	I0927 16:57:16.632993   19184 out.go:177] * Verifying ingress addon...
	I0927 16:57:16.634911   19184 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 16:57:16.642360   19184 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 16:57:16.642388   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:17.052692   19184 pod_ready.go:93] pod "kube-apiserver-addons-393052" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:17.052719   19184 pod_ready.go:82] duration metric: took 3.506530427s for pod "kube-apiserver-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.052732   19184 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.135135   19184 pod_ready.go:93] pod "kube-controller-manager-addons-393052" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:17.135164   19184 pod_ready.go:82] duration metric: took 82.422027ms for pod "kube-controller-manager-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.135178   19184 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fs9gn" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.142029   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:17.143058   19184 pod_ready.go:93] pod "kube-proxy-fs9gn" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:17.143083   19184 pod_ready.go:82] duration metric: took 7.896613ms for pod "kube-proxy-fs9gn" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.143096   19184 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.232231   19184 pod_ready.go:93] pod "kube-scheduler-addons-393052" in "kube-system" namespace has status "Ready":"True"
	I0927 16:57:17.232259   19184 pod_ready.go:82] duration metric: took 89.154295ms for pod "kube-scheduler-addons-393052" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:17.232268   19184 pod_ready.go:39] duration metric: took 6.885983274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 16:57:17.232291   19184 api_server.go:52] waiting for apiserver process to appear ...
	I0927 16:57:17.232346   19184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 16:57:17.639532   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:18.139683   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:18.640990   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:19.140110   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:19.735723   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:19.738222   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.404912382s)
	I0927 16:57:19.738385   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.211355169s)
	I0927 16:57:19.738683   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.202133176s)
	I0927 16:57:19.738743   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.108858128s)
	I0927 16:57:19.738923   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.108657841s)
	I0927 16:57:19.739021   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.501730311s)
	I0927 16:57:19.739066   19184 addons.go:475] Verifying addon registry=true in "addons-393052"
	I0927 16:57:19.739302   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.212717311s)
	I0927 16:57:19.739472   19184 addons.go:475] Verifying addon metrics-server=true in "addons-393052"
	I0927 16:57:19.739556   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.207746189s)
	W0927 16:57:19.739590   19184 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 16:57:19.739605   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.602734227s)
	I0927 16:57:19.739611   19184 retry.go:31] will retry after 253.531784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 16:57:19.739717   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.299505286s)
	I0927 16:57:19.741712   19184 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-393052 service yakd-dashboard -n yakd-dashboard
	
	I0927 16:57:19.741845   19184 out.go:177] * Verifying registry addon...
	I0927 16:57:19.745080   19184 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 16:57:19.829020   19184 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 16:57:19.829046   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0927 16:57:19.829649   19184 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0927 16:57:19.993664   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:20.139755   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:20.249306   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:20.639905   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:20.748822   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:21.027958   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.29605502s)
	I0927 16:57:21.028004   19184 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-393052"
	I0927 16:57:21.028080   19184 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.170074792s)
	I0927 16:57:21.028227   19184 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.795862172s)
	I0927 16:57:21.028271   19184 api_server.go:72] duration metric: took 13.359270427s to wait for apiserver process to appear ...
	I0927 16:57:21.028283   19184 api_server.go:88] waiting for apiserver healthz status ...
	I0927 16:57:21.028306   19184 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 16:57:21.029742   19184 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:21.029859   19184 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 16:57:21.031201   19184 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 16:57:21.032133   19184 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 16:57:21.032895   19184 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 16:57:21.033138   19184 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 16:57:21.033155   19184 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 16:57:21.033767   19184 api_server.go:141] control plane version: v1.31.1
	I0927 16:57:21.033793   19184 api_server.go:131] duration metric: took 5.502498ms to wait for apiserver health ...
	I0927 16:57:21.033803   19184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 16:57:21.038097   19184 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 16:57:21.038122   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:21.044592   19184 system_pods.go:59] 18 kube-system pods found
	I0927 16:57:21.044633   19184 system_pods.go:61] "coredns-7c65d6cfc9-2pllc" [31a38dc3-c69a-4166-ba23-cd1906b032ba] Running
	I0927 16:57:21.044643   19184 system_pods.go:61] "coredns-7c65d6cfc9-smbjf" [c8cb0003-29ec-470e-b417-f09950906702] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0927 16:57:21.044655   19184 system_pods.go:61] "csi-hostpath-attacher-0" [f7b8c38e-d1ca-45f9-9809-be45e7a843a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 16:57:21.044660   19184 system_pods.go:61] "csi-hostpath-resizer-0" [f83c3d29-d2c3-41d6-941c-271e02cb975f] Pending
	I0927 16:57:21.044670   19184 system_pods.go:61] "csi-hostpathplugin-m9qj5" [d61fcb0c-3bd8-4d8f-bedf-829272da6773] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 16:57:21.044676   19184 system_pods.go:61] "etcd-addons-393052" [898d5ec8-06fa-496c-9ffd-33acc773e9d0] Running
	I0927 16:57:21.044682   19184 system_pods.go:61] "kube-apiserver-addons-393052" [cb3a0080-b936-4c84-a15e-b6bcd8e2b7a3] Running
	I0927 16:57:21.044688   19184 system_pods.go:61] "kube-controller-manager-addons-393052" [679e582b-b940-400c-9018-35cc52ee1cae] Running
	I0927 16:57:21.044703   19184 system_pods.go:61] "kube-ingress-dns-minikube" [b45e64cb-7255-49b7-9a95-805c787f8bdd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 16:57:21.044709   19184 system_pods.go:61] "kube-proxy-fs9gn" [495174d9-13f1-40a9-9ec8-47e309f38d80] Running
	I0927 16:57:21.044716   19184 system_pods.go:61] "kube-scheduler-addons-393052" [869db9a7-44ce-483d-a094-106b96b1e0de] Running
	I0927 16:57:21.044724   19184 system_pods.go:61] "metrics-server-84c5f94fbc-vccps" [1e0b8e4f-fed1-4898-8e4f-a4225ff47189] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 16:57:21.044739   19184 system_pods.go:61] "nvidia-device-plugin-daemonset-f5t54" [b23a243e-32fe-4cd7-979d-e8f6bd767c6c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 16:57:21.044748   19184 system_pods.go:61] "registry-66c9cd494c-zwv8v" [a2abbccc-9f95-4a37-8198-40d424cdcb00] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 16:57:21.044757   19184 system_pods.go:61] "registry-proxy-cf9vr" [c6fffe66-001f-45ee-9860-645249413bc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 16:57:21.044769   19184 system_pods.go:61] "snapshot-controller-56fcc65765-q787g" [9b6290d7-08a7-4a75-bff3-81884fb4ded7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 16:57:21.044778   19184 system_pods.go:61] "snapshot-controller-56fcc65765-zkcfw" [6766b482-0155-4a4d-ac01-c6355ab29672] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 16:57:21.044788   19184 system_pods.go:61] "storage-provisioner" [a16d1409-216e-4eb8-b8d6-01cdbf3369f1] Running
	I0927 16:57:21.044798   19184 system_pods.go:74] duration metric: took 10.987068ms to wait for pod list to return data ...
	I0927 16:57:21.044811   19184 default_sa.go:34] waiting for default service account to be created ...
	I0927 16:57:21.051482   19184 default_sa.go:45] found service account: "default"
	I0927 16:57:21.051526   19184 default_sa.go:55] duration metric: took 6.689124ms for default service account to be created ...
	I0927 16:57:21.051564   19184 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 16:57:21.131533   19184 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 16:57:21.131557   19184 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 16:57:21.134035   19184 system_pods.go:86] 18 kube-system pods found
	I0927 16:57:21.134072   19184 system_pods.go:89] "coredns-7c65d6cfc9-2pllc" [31a38dc3-c69a-4166-ba23-cd1906b032ba] Running
	I0927 16:57:21.134085   19184 system_pods.go:89] "coredns-7c65d6cfc9-smbjf" [c8cb0003-29ec-470e-b417-f09950906702] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0927 16:57:21.134098   19184 system_pods.go:89] "csi-hostpath-attacher-0" [f7b8c38e-d1ca-45f9-9809-be45e7a843a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 16:57:21.134105   19184 system_pods.go:89] "csi-hostpath-resizer-0" [f83c3d29-d2c3-41d6-941c-271e02cb975f] Pending
	I0927 16:57:21.134119   19184 system_pods.go:89] "csi-hostpathplugin-m9qj5" [d61fcb0c-3bd8-4d8f-bedf-829272da6773] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 16:57:21.134124   19184 system_pods.go:89] "etcd-addons-393052" [898d5ec8-06fa-496c-9ffd-33acc773e9d0] Running
	I0927 16:57:21.134131   19184 system_pods.go:89] "kube-apiserver-addons-393052" [cb3a0080-b936-4c84-a15e-b6bcd8e2b7a3] Running
	I0927 16:57:21.134137   19184 system_pods.go:89] "kube-controller-manager-addons-393052" [679e582b-b940-400c-9018-35cc52ee1cae] Running
	I0927 16:57:21.134151   19184 system_pods.go:89] "kube-ingress-dns-minikube" [b45e64cb-7255-49b7-9a95-805c787f8bdd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 16:57:21.134156   19184 system_pods.go:89] "kube-proxy-fs9gn" [495174d9-13f1-40a9-9ec8-47e309f38d80] Running
	I0927 16:57:21.134164   19184 system_pods.go:89] "kube-scheduler-addons-393052" [869db9a7-44ce-483d-a094-106b96b1e0de] Running
	I0927 16:57:21.134172   19184 system_pods.go:89] "metrics-server-84c5f94fbc-vccps" [1e0b8e4f-fed1-4898-8e4f-a4225ff47189] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 16:57:21.134183   19184 system_pods.go:89] "nvidia-device-plugin-daemonset-f5t54" [b23a243e-32fe-4cd7-979d-e8f6bd767c6c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 16:57:21.134192   19184 system_pods.go:89] "registry-66c9cd494c-zwv8v" [a2abbccc-9f95-4a37-8198-40d424cdcb00] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 16:57:21.134208   19184 system_pods.go:89] "registry-proxy-cf9vr" [c6fffe66-001f-45ee-9860-645249413bc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 16:57:21.134216   19184 system_pods.go:89] "snapshot-controller-56fcc65765-q787g" [9b6290d7-08a7-4a75-bff3-81884fb4ded7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 16:57:21.134228   19184 system_pods.go:89] "snapshot-controller-56fcc65765-zkcfw" [6766b482-0155-4a4d-ac01-c6355ab29672] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 16:57:21.134234   19184 system_pods.go:89] "storage-provisioner" [a16d1409-216e-4eb8-b8d6-01cdbf3369f1] Running
	I0927 16:57:21.134244   19184 system_pods.go:126] duration metric: took 82.668631ms to wait for k8s-apps to be running ...
	I0927 16:57:21.134257   19184 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 16:57:21.134307   19184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 16:57:21.141614   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:21.154630   19184 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 16:57:21.154713   19184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 16:57:21.240929   19184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 16:57:21.250557   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:21.537580   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:21.640492   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:21.749503   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:22.037921   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:22.141114   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:22.239862   19184 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.105505696s)
	I0927 16:57:22.239892   19184 system_svc.go:56] duration metric: took 1.105633608s WaitForService to wait for kubelet
	I0927 16:57:22.239906   19184 kubeadm.go:582] duration metric: took 14.570922098s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 16:57:22.239922   19184 node_conditions.go:102] verifying NodePressure condition ...
	I0927 16:57:22.240088   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.246075464s)
	I0927 16:57:22.242956   19184 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0927 16:57:22.242992   19184 node_conditions.go:123] node cpu capacity is 8
	I0927 16:57:22.243007   19184 node_conditions.go:105] duration metric: took 3.071939ms to run NodePressure ...
	I0927 16:57:22.243033   19184 start.go:241] waiting for startup goroutines ...
	I0927 16:57:22.249205   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:22.536536   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:22.553912   19184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.312888237s)
	I0927 16:57:22.555408   19184 addons.go:475] Verifying addon gcp-auth=true in "addons-393052"
	I0927 16:57:22.556925   19184 out.go:177] * Verifying gcp-auth addon...
	I0927 16:57:22.559005   19184 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 16:57:22.635977   19184 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 16:57:22.640613   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:22.749169   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:23.037495   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:23.138609   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:23.249246   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:23.536803   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:23.651613   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:23.752231   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:24.036405   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:24.139366   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:24.248551   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:24.538736   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:24.638935   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:24.748291   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:25.036755   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:25.139149   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:25.248564   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:25.536312   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:25.638441   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:25.748464   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:26.035760   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:26.139089   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:26.248081   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:26.537158   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:26.639593   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:26.748540   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:27.036576   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:27.139402   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:27.248781   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:27.536716   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:27.640020   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:27.749203   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:28.036912   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:28.138484   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:28.248932   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:28.536778   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:28.639631   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:28.749073   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:29.036789   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:29.139577   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:29.249315   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:29.536814   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:29.664366   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:29.748620   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:30.036420   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:30.141899   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:30.248671   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:30.536224   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:30.638817   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:30.748067   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:31.036697   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:31.138260   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:31.250380   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:31.537592   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:31.639103   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:31.748464   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:32.036618   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:32.164115   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:32.248669   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:32.536098   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:32.638382   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:32.748908   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:33.037313   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:33.139133   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:33.248325   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:33.536980   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:33.638532   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:33.748989   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:34.036799   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:34.139960   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:34.249090   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:34.536940   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:34.639200   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:34.749361   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:35.036752   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:35.139802   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:35.249145   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:35.536979   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:35.639105   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:35.749403   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:36.037261   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:36.139748   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:36.248978   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:36.536743   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:36.639530   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:36.750099   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:37.037060   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:37.139471   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:37.248633   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:37.537737   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:37.667994   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:37.748720   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:38.035963   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:38.139475   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:38.248743   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:38.536465   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:38.638839   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:38.749362   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:39.036090   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:39.138904   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:39.248210   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:39.536340   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:39.639165   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:39.748774   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:40.036690   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:40.139811   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:40.249636   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:40.538643   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:40.638889   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:40.748984   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:41.036338   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:41.139021   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:41.248255   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:41.536783   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:41.638357   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:41.748973   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:42.036819   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:42.141878   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:42.249448   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:42.536769   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:42.639241   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:42.748743   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:43.036171   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:43.138840   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:43.248271   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:43.536730   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:43.638223   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:43.748423   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:44.036830   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:44.138852   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:44.248063   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:44.536238   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:44.639011   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:44.748244   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:45.036178   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:45.139498   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:45.248926   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:45.536383   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:45.639484   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:45.748961   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:46.036213   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:46.139688   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:46.249215   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:46.536873   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:46.638655   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:46.749151   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:47.036727   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:47.139503   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:47.248940   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:47.536377   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:47.639149   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:47.748743   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:48.036383   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:48.138916   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:48.249870   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:48.536826   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:48.638873   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:48.749375   19184 kapi.go:107] duration metric: took 29.004293606s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 16:57:49.037029   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:49.138564   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:49.536715   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:49.638164   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:50.036693   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:50.137837   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:50.536870   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:50.639096   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:51.036933   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:51.138749   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:51.536897   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:51.638951   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:52.036474   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:52.139365   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:52.537306   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:52.639117   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:53.036861   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:53.138627   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:53.536712   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:53.639289   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:54.037342   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:54.138278   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:54.537262   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:54.638734   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:55.036245   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:55.139389   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:55.537107   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:55.639174   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:56.036896   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:56.139207   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:56.536495   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:56.639982   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:57.035954   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:57.139063   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:57.541788   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:57.639138   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:58.129516   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:58.139094   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:58.537213   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:58.641815   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:59.037342   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:59.139368   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:59.536905   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:59.640340   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:00.035667   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:00.139118   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:00.536568   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:00.640124   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:01.036308   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:01.139068   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:01.537026   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:01.642721   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:02.037555   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:02.139429   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:02.537289   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:02.639398   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:03.087479   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:03.138800   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:03.536203   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:03.638978   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:04.036258   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:04.139087   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:04.537123   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:04.639412   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:05.036656   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:05.139620   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:05.536407   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:05.638904   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:06.036172   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:06.139062   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:06.536330   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:06.638939   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:07.036533   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:07.139587   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:07.536845   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:07.639378   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:08.037054   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:08.139151   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:08.537052   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:08.639154   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:09.036454   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:09.139477   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:09.537234   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:09.639346   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:10.036792   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:10.138796   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:10.536919   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:10.638664   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:11.036786   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:11.139480   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:11.549277   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:11.651150   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:12.037089   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:12.138591   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:12.537608   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:12.638616   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:13.037277   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:13.139078   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:13.536165   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:13.639473   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:14.036739   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:14.139484   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:14.560720   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:14.663016   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:15.036767   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:15.139710   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:15.537105   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:15.731086   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:16.035956   19184 kapi.go:107] duration metric: took 55.003821665s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 16:58:16.138390   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:16.638095   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:17.138354   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:17.638549   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:18.139015   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:18.640496   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:19.138989   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:19.639180   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:20.138897   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:20.638569   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:21.140194   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:21.639674   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:22.139520   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:22.639212   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:23.139492   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:23.639740   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:24.138902   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:24.638804   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:25.139069   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:25.639308   19184 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:26.233509   19184 kapi.go:107] duration metric: took 1m9.598597044s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 16:58:45.062018   19184 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 16:58:45.062039   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:45.562830   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:46.062650   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:46.562955   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:47.061956   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:47.561905   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:48.063676   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:48.563138   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:49.062484   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:49.562425   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:50.062649   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:50.562658   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:51.061640   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:51.562489   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:52.062562   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:52.562553   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:53.062837   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:53.563090   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:54.062225   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:54.562424   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:55.062489   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:55.562192   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:56.062042   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:56.562065   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:57.062673   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:57.562580   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:58.062570   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:58.562748   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:59.062778   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:59.562764   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:00.062838   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:00.562840   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:01.062609   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:01.562944   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:02.062582   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:02.565030   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:03.062301   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:03.561992   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:04.062251   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:04.562659   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:05.062844   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:05.562049   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:06.061970   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:06.561693   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:07.062873   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:07.561806   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:08.061925   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:08.562583   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:09.063106   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:09.561993   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:10.062207   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:10.562378   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:11.062670   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:11.562744   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:12.062573   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:12.562734   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:13.061739   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:13.562698   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:14.062962   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:14.561707   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:15.062648   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:15.562510   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:16.062455   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:16.562640   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:17.062781   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:17.562325   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:18.062302   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:18.561976   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:19.062998   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:19.561707   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:20.063015   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:20.562075   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:21.062272   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:21.562427   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:22.062267   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:22.562201   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:23.062832   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:23.563688   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:24.062357   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:24.562256   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:25.062353   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:25.562717   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:26.062490   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:26.563093   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:27.062105   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:27.563025   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:28.062464   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:28.562233   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:29.062439   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:29.562721   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:30.062868   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:30.562003   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:31.062974   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:31.562377   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:32.062466   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:32.562870   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:33.062513   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:33.562452   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:34.062982   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:34.561812   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:35.063107   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:35.562345   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:36.062006   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:36.561671   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:37.062875   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:37.561643   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:38.062046   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:38.562271   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:39.062606   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:39.562536   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:40.062828   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:40.562096   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:41.062296   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:41.562789   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:42.062883   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:42.562366   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:43.063129   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:43.562011   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:44.062298   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:44.562117   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:45.062286   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:45.562517   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:46.062406   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:46.562242   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:47.062210   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:47.562507   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:48.061815   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:48.562792   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:49.062971   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:49.561895   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:50.063496   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:50.562392   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:51.062774   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:51.563038   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:52.061777   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:52.563495   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:53.063047   19184 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:59:53.562515   19184 kapi.go:107] duration metric: took 2m31.003507979s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 16:59:53.564330   19184 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-393052 cluster.
	I0927 16:59:53.566077   19184 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 16:59:53.567934   19184 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 16:59:53.569672   19184 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, volcano, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0927 16:59:53.571225   19184 addons.go:510] duration metric: took 2m45.902218432s for enable addons: enabled=[cloud-spanner storage-provisioner volcano ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0927 16:59:53.571281   19184 start.go:246] waiting for cluster config update ...
	I0927 16:59:53.571310   19184 start.go:255] writing updated cluster config ...
	I0927 16:59:53.571581   19184 ssh_runner.go:195] Run: rm -f paused
	I0927 16:59:53.622141   19184 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 16:59:53.624029   19184 out.go:177] * Done! kubectl is now configured to use "addons-393052" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="error getting RW layer size for container ID 'ce3e0961f92df05bd817d7bc81cb28fc7d4148e54a8d89ed866d641ebc2a4d74': Error response from daemon: No such container: ce3e0961f92df05bd817d7bc81cb28fc7d4148e54a8d89ed866d641ebc2a4d74"
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ce3e0961f92df05bd817d7bc81cb28fc7d4148e54a8d89ed866d641ebc2a4d74'"
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="error getting RW layer size for container ID '72e0c551e2a6e951c35f65e2629e823ff202ba559a871a110e2c0b19a5f4dda9': Error response from daemon: No such container: 72e0c551e2a6e951c35f65e2629e823ff202ba559a871a110e2c0b19a5f4dda9"
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '72e0c551e2a6e951c35f65e2629e823ff202ba559a871a110e2c0b19a5f4dda9'"
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="error getting RW layer size for container ID '519348d67970a3498daa3b59a817a4124cf4be0a30759d4e6372091653f8e7cf': Error response from daemon: No such container: 519348d67970a3498daa3b59a817a4124cf4be0a30759d4e6372091653f8e7cf"
	Sep 27 17:09:22 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '519348d67970a3498daa3b59a817a4124cf4be0a30759d4e6372091653f8e7cf'"
	Sep 27 17:09:24 addons-393052 dockerd[1344]: time="2024-09-27T17:09:24.941066244Z" level=info msg="ignoring event" container=69ece29ed7af1f1975645ae848bb7525d30fdab6fca97adf88faf5b98046a500 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:24 addons-393052 dockerd[1344]: time="2024-09-27T17:09:24.941508520Z" level=info msg="ignoring event" container=93decf959e3b76a9c26627938cf4718c9552c13ac92e02afc19a0b4471be236e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:25 addons-393052 dockerd[1344]: time="2024-09-27T17:09:25.115674612Z" level=info msg="ignoring event" container=101e0d80e45eaa383021fb9975714ca097e2659afc25ba00e690a7651217ee1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:25 addons-393052 dockerd[1344]: time="2024-09-27T17:09:25.158814580Z" level=info msg="ignoring event" container=29959fc87088ce57f59e397ad1a0be2d76e6698e74e320f493e6e8aef7fe0123 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:26 addons-393052 dockerd[1344]: time="2024-09-27T17:09:26.743578604Z" level=info msg="ignoring event" container=90da953139646e35d84d932709018f04273fca12e7ae0a57b6e957543225da87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:26 addons-393052 dockerd[1344]: time="2024-09-27T17:09:26.878652667Z" level=info msg="ignoring event" container=fb19a5fae819f3713f7652a1e8c8ed24122902196b12960078adddb46f7fde78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:32 addons-393052 dockerd[1344]: time="2024-09-27T17:09:32.368839126Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ec506c0f2092fd3f traceID=b9697c7c1f3cd8eae9eebc643358eeaf
	Sep 27 17:09:32 addons-393052 dockerd[1344]: time="2024-09-27T17:09:32.371135754Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ec506c0f2092fd3f traceID=b9697c7c1f3cd8eae9eebc643358eeaf
	Sep 27 17:09:32 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:32Z" level=error msg="error getting RW layer size for container ID '90da953139646e35d84d932709018f04273fca12e7ae0a57b6e957543225da87': Error response from daemon: No such container: 90da953139646e35d84d932709018f04273fca12e7ae0a57b6e957543225da87"
	Sep 27 17:09:32 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:32Z" level=error msg="Set backoffDuration to : 1m0s for container ID '90da953139646e35d84d932709018f04273fca12e7ae0a57b6e957543225da87'"
	Sep 27 17:09:46 addons-393052 dockerd[1344]: time="2024-09-27T17:09:46.638715981Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a spanID=6a7c6c8574830076 traceID=e988cc06e797bfa276daceba2b2de587
	Sep 27 17:09:46 addons-393052 dockerd[1344]: time="2024-09-27T17:09:46.660997957Z" level=info msg="ignoring event" container=c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:46 addons-393052 dockerd[1344]: time="2024-09-27T17:09:46.788627666Z" level=info msg="ignoring event" container=598b3917890204cec64e1aa77f3408d69fd0708c8b2546b73eea61995d7ca39f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:49 addons-393052 dockerd[1344]: time="2024-09-27T17:09:49.055950193Z" level=info msg="ignoring event" container=91d2a8dba98c3df6cc4712f2bef8029f8c83847db033390f5df303b85f4b4199 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:49 addons-393052 dockerd[1344]: time="2024-09-27T17:09:49.136491449Z" level=info msg="ignoring event" container=342b2733fd26d8707f29c513fdf40b744a0f1ae1e7cf3ffd1ba7485a81e7d45b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:49 addons-393052 dockerd[1344]: time="2024-09-27T17:09:49.190489749Z" level=info msg="ignoring event" container=f45968b2d02f7fcd0da3eccd9d29fe947b9f5a14b0ae563b7d3bbc13322c7dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:49 addons-393052 cri-dockerd[1609]: time="2024-09-27T17:09:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-cf9vr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 27 17:09:49 addons-393052 dockerd[1344]: time="2024-09-27T17:09:49.288000731Z" level=info msg="ignoring event" container=c4089cead8be201d3e2d774057c6600df66d2568d55f0d7fc1ca4a1dddfb134b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:09:49 addons-393052 dockerd[1344]: time="2024-09-27T17:09:49.403261359Z" level=info msg="ignoring event" container=3fda06aede0b46bf947277d57ccc98192f550ec54b931b83488c5f2f65b585cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f3538ff923f82       a416a98b71e22                                                                                                                33 seconds ago      Exited              helper-pod                0                   17b86ec4146e8       helper-pod-delete-pvc-cc992d79-9229-48dc-815e-b7a98bf6633a
	ee87bd227804c       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  35 seconds ago      Running             hello-world-app           0                   10edbd02a7330       hello-world-app-55bf9c44b4-x8wjr
	4a511954341a2       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              37 seconds ago      Exited              busybox                   0                   e39f26fbd6f4b       test-local-path
	b17a8608d4003       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                45 seconds ago      Running             nginx                     0                   43a65b5b32e5c       nginx
	47d53e37ec7bf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   a8cfd940d0c8a       gcp-auth-89d5ffd79-qp4qs
	aa5e5ceb5c47e       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   f26c7788c7a0e       ingress-nginx-admission-patch-vhjg5
	7002251e6f76b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   b54fb94c99e76       ingress-nginx-admission-create-bcm94
	342b2733fd26d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   c4089cead8be2       registry-proxy-cf9vr
	783779b671a4c       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   994419f5d346e       storage-provisioner
	97c540f12f145       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   f8f3586a24f9c       coredns-7c65d6cfc9-2pllc
	2ff7b813ff0c7       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   930ab502192ed       kube-proxy-fs9gn
	7c6cf29430b07       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   fe83f070b929a       kube-controller-manager-addons-393052
	6ca02db753a14       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   51a67bff8621c       kube-scheduler-addons-393052
	32e524c80312c       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   f95ac3d87654c       etcd-addons-393052
	8f4723a6a0289       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   fba0501039e47       kube-apiserver-addons-393052
	
	
	==> coredns [97c540f12f14] <==
	[INFO] 10.244.0.6:37957 - 47983 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084231s
	[INFO] 10.244.0.6:55787 - 55796 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103443s
	[INFO] 10.244.0.6:55787 - 49865 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111324s
	[INFO] 10.244.0.6:42194 - 16721 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00557203s
	[INFO] 10.244.0.6:42194 - 59477 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.015169552s
	[INFO] 10.244.0.6:58110 - 31067 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00578708s
	[INFO] 10.244.0.6:58110 - 16222 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005821147s
	[INFO] 10.244.0.6:54101 - 8224 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004553998s
	[INFO] 10.244.0.6:54101 - 34085 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007810302s
	[INFO] 10.244.0.6:55758 - 60775 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081597s
	[INFO] 10.244.0.6:55758 - 64098 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125633s
	[INFO] 10.244.0.25:60826 - 40330 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284383s
	[INFO] 10.244.0.25:43903 - 53131 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373676s
	[INFO] 10.244.0.25:44982 - 29691 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137853s
	[INFO] 10.244.0.25:42193 - 23660 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127464s
	[INFO] 10.244.0.25:59553 - 13661 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148985s
	[INFO] 10.244.0.25:40546 - 22460 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000185457s
	[INFO] 10.244.0.25:45269 - 51940 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007819436s
	[INFO] 10.244.0.25:41351 - 17596 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.019590362s
	[INFO] 10.244.0.25:56449 - 40231 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007552049s
	[INFO] 10.244.0.25:41646 - 5879 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.010362641s
	[INFO] 10.244.0.25:52636 - 16386 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005956758s
	[INFO] 10.244.0.25:53653 - 10420 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005993302s
	[INFO] 10.244.0.25:39398 - 49603 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.001874088s
	[INFO] 10.244.0.25:43953 - 58843 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002876604s
	
	
	==> describe nodes <==
	Name:               addons-393052
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-393052
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=addons-393052
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T16_57_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-393052
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 16:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-393052
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:09:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:09:37 +0000   Fri, 27 Sep 2024 16:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:09:37 +0000   Fri, 27 Sep 2024 16:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:09:37 +0000   Fri, 27 Sep 2024 16:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:09:37 +0000   Fri, 27 Sep 2024 16:56:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-393052
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859300Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859300Ki
	  pods:               110
	System Info:
	  Machine ID:                 893d65f92a5545f0a1879ad7d6c0050e
	  System UUID:                d51b6008-d8a7-4e90-ae57-414d55d89d7f
	  Boot ID:                    d796e8e3-3631-4921-b1b3-1be59ed34d92
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-x8wjr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     registry-test                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  gcp-auth                    gcp-auth-89d5ffd79-qp4qs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-2pllc                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-393052                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-393052             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-393052    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-fs9gn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-393052             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-393052 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-393052 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-393052 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-393052 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-393052 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-393052 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-393052 event: Registered Node addons-393052 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 02 58 eb 32 6b 08 06
	[  +2.148428] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 5e fe b3 41 46 08 06
	[  +2.534558] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 62 e2 05 4d 6c 08 06
	[  +5.206146] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 1e 0f a5 cb 45 08 06
	[  +0.404038] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 70 bf 89 ef b2 08 06
	[  +0.140512] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 4e 7d a1 b0 ee 08 06
	[ +12.054665] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e c4 9c 55 09 9c 08 06
	[  +1.067379] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 7d b5 62 1c ab 08 06
	[Sep27 16:59] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 78 a0 91 2b 49 08 06
	[  +0.056258] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 9d ae 94 83 a7 08 06
	[ +26.945604] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e e1 e3 f2 1f 66 08 06
	[  +0.000405] IPv4: martian source 10.244.0.25 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 49 03 22 01 91 08 06
	[Sep27 17:09] IPv4: martian source 10.244.0.32 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e c4 9c 55 09 9c 08 06
	
	
	==> etcd [32e524c80312] <==
	{"level":"info","ts":"2024-09-27T16:56:58.537516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T16:56:58.537539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-27T16:56:58.537562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:58.537575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:58.537588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:58.537601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:58.538672Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-393052 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T16:56:58.538704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T16:56:58.538721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T16:56:58.538866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T16:56:58.538890Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T16:56:58.538908Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:58.539580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:58.539742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:58.539778Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:58.539869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T16:56:58.539869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T16:56:58.540938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T16:56:58.540998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-27T16:57:10.733111Z","caller":"traceutil/trace.go:171","msg":"trace[1242585742] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"100.685066ms","start":"2024-09-27T16:57:10.632412Z","end":"2024-09-27T16:57:10.733097Z","steps":["trace[1242585742] 'compare'  (duration: 86.586576ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T16:57:10.733371Z","caller":"traceutil/trace.go:171","msg":"trace[345707985] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"100.79666ms","start":"2024-09-27T16:57:10.632562Z","end":"2024-09-27T16:57:10.733359Z","steps":["trace[345707985] 'process raft request'  (duration: 100.251005ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T16:57:23.649091Z","caller":"traceutil/trace.go:171","msg":"trace[1863905615] transaction","detail":"{read_only:false; response_revision:974; number_of_response:1; }","duration":"112.056281ms","start":"2024-09-27T16:57:23.537014Z","end":"2024-09-27T16:57:23.649070Z","steps":["trace[1863905615] 'process raft request'  (duration: 111.946672ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T17:06:58.559219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1845}
	{"level":"info","ts":"2024-09-27T17:06:58.583867Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1845,"took":"24.084832ms","hash":1144207730,"current-db-size-bytes":8867840,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4755456,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-27T17:06:58.583916Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1144207730,"revision":1845,"compact-revision":-1}
	
	
	==> gcp-auth [47d53e37ec7b] <==
	2024/09/27 17:00:36 Ready to write response ...
	2024/09/27 17:00:36 Ready to marshal response ...
	2024/09/27 17:00:36 Ready to write response ...
	2024/09/27 17:08:38 Ready to marshal response ...
	2024/09/27 17:08:38 Ready to write response ...
	2024/09/27 17:08:39 Ready to marshal response ...
	2024/09/27 17:08:39 Ready to write response ...
	2024/09/27 17:08:39 Ready to marshal response ...
	2024/09/27 17:08:39 Ready to write response ...
	2024/09/27 17:08:39 Ready to marshal response ...
	2024/09/27 17:08:39 Ready to write response ...
	2024/09/27 17:08:48 Ready to marshal response ...
	2024/09/27 17:08:48 Ready to write response ...
	2024/09/27 17:09:00 Ready to marshal response ...
	2024/09/27 17:09:00 Ready to write response ...
	2024/09/27 17:09:05 Ready to marshal response ...
	2024/09/27 17:09:05 Ready to write response ...
	2024/09/27 17:09:05 Ready to marshal response ...
	2024/09/27 17:09:05 Ready to write response ...
	2024/09/27 17:09:09 Ready to marshal response ...
	2024/09/27 17:09:09 Ready to write response ...
	2024/09/27 17:09:11 Ready to marshal response ...
	2024/09/27 17:09:11 Ready to write response ...
	2024/09/27 17:09:15 Ready to marshal response ...
	2024/09/27 17:09:15 Ready to write response ...
	
	
	==> kernel <==
	 17:09:50 up 52 min,  0 users,  load average: 1.12, 0.62, 0.38
	Linux addons-393052 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [8f4723a6a028] <==
	W0927 17:00:27.357907       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 17:00:27.744450       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 17:00:28.033688       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0927 17:08:39.006966       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.233.170"}
	I0927 17:08:51.464064       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0927 17:08:51.729976       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 17:09:00.095389       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0927 17:09:00.348897       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 17:09:00.533292       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.128.181"}
	W0927 17:09:01.135986       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 17:09:12.037429       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.88.205"}
	I0927 17:09:24.776406       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 17:09:24.776460       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 17:09:24.789012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 17:09:24.789067       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 17:09:24.791799       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 17:09:24.791885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 17:09:24.800771       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 17:09:24.800820       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 17:09:24.833416       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 17:09:24.833459       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 17:09:25.792374       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 17:09:25.834334       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 17:09:25.842159       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0927 17:09:32.007514       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [7c6cf29430b0] <==
	E0927 17:09:32.038994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:33.234279       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:33.234318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:33.673893       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:33.673933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:35.105999       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:35.106043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:36.019734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:36.019776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:09:37.129085       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0927 17:09:37.129136       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 17:09:37.324360       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0927 17:09:37.324398       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 17:09:37.973403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-393052"
	W0927 17:09:39.972121       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:39.972161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:42.652784       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:42.652825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:44.429084       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:44.429128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:44.845352       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:44.845399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:09:46.486199       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:09:46.486240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:09:49.019326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.655µs"
	
	
	==> kube-proxy [2ff7b813ff0c] <==
	I0927 16:57:07.663693       1 server_linux.go:66] "Using iptables proxy"
	I0927 16:57:07.834500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 16:57:07.834578       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 16:57:08.124120       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 16:57:08.124203       1 server_linux.go:169] "Using iptables Proxier"
	I0927 16:57:08.132649       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 16:57:08.133133       1 server.go:483] "Version info" version="v1.31.1"
	I0927 16:57:08.133166       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 16:57:08.134599       1 config.go:199] "Starting service config controller"
	I0927 16:57:08.136598       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 16:57:08.134829       1 config.go:105] "Starting endpoint slice config controller"
	I0927 16:57:08.136629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 16:57:08.135486       1 config.go:328] "Starting node config controller"
	I0927 16:57:08.136640       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 16:57:08.237655       1 shared_informer.go:320] Caches are synced for node config
	I0927 16:57:08.237715       1 shared_informer.go:320] Caches are synced for service config
	I0927 16:57:08.237753       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6ca02db753a1] <==
	W0927 16:56:59.843660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 16:56:59.844600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:59.843744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 16:56:59.844647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:59.843765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 16:56:59.844719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:59.844114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 16:56:59.844744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:59.844901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 16:56:59.844924       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.652306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 16:57:00.652348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.708740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 16:57:00.708789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.780314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 16:57:00.780354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.809747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 16:57:00.809783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.815164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 16:57:00.815203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.877170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 16:57:00.877206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 16:57:00.912403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 16:57:00.912446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 16:57:01.438615       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 17:09:46 addons-393052 kubelet[2447]: I0927 17:09:46.953518    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-config-volume\") pod \"a5386c6b-5f15-44cb-b4e3-8d3201b1abf5\" (UID: \"a5386c6b-5f15-44cb-b4e3-8d3201b1abf5\") "
	Sep 27 17:09:46 addons-393052 kubelet[2447]: I0927 17:09:46.953572    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdcw9\" (UniqueName: \"kubernetes.io/projected/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-kube-api-access-qdcw9\") pod \"a5386c6b-5f15-44cb-b4e3-8d3201b1abf5\" (UID: \"a5386c6b-5f15-44cb-b4e3-8d3201b1abf5\") "
	Sep 27 17:09:46 addons-393052 kubelet[2447]: I0927 17:09:46.954024    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5386c6b-5f15-44cb-b4e3-8d3201b1abf5" (UID: "a5386c6b-5f15-44cb-b4e3-8d3201b1abf5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 27 17:09:46 addons-393052 kubelet[2447]: I0927 17:09:46.955668    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-kube-api-access-qdcw9" (OuterVolumeSpecName: "kube-api-access-qdcw9") pod "a5386c6b-5f15-44cb-b4e3-8d3201b1abf5" (UID: "a5386c6b-5f15-44cb-b4e3-8d3201b1abf5"). InnerVolumeSpecName "kube-api-access-qdcw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:09:47 addons-393052 kubelet[2447]: I0927 17:09:47.054109    2447 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-config-volume\") on node \"addons-393052\" DevicePath \"\""
	Sep 27 17:09:47 addons-393052 kubelet[2447]: I0927 17:09:47.054147    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qdcw9\" (UniqueName: \"kubernetes.io/projected/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5-kube-api-access-qdcw9\") on node \"addons-393052\" DevicePath \"\""
	Sep 27 17:09:47 addons-393052 kubelet[2447]: I0927 17:09:47.158001    2447 scope.go:117] "RemoveContainer" containerID="c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a"
	Sep 27 17:09:47 addons-393052 kubelet[2447]: I0927 17:09:47.172057    2447 scope.go:117] "RemoveContainer" containerID="c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a"
	Sep 27 17:09:47 addons-393052 kubelet[2447]: E0927 17:09:47.172760    2447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a" containerID="c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a"
	Sep 27 17:09:47 addons-393052 kubelet[2447]: I0927 17:09:47.172804    2447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a"} err="failed to get container status \"c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a\": rpc error: code = Unknown desc = Error response from daemon: No such container: c6114842bb9eb3db024e6f85fa983e680a2bed97c4637ad13c35ad7c576bb71a"
	Sep 27 17:09:48 addons-393052 kubelet[2447]: E0927 17:09:48.250350    2447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="a80c82cd-062b-4949-be24-1278338d230e"
	Sep 27 17:09:48 addons-393052 kubelet[2447]: I0927 17:09:48.257242    2447 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5386c6b-5f15-44cb-b4e3-8d3201b1abf5" path="/var/lib/kubelet/pods/a5386c6b-5f15-44cb-b4e3-8d3201b1abf5/volumes"
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.231128    2447 scope.go:117] "RemoveContainer" containerID="91d2a8dba98c3df6cc4712f2bef8029f8c83847db033390f5df303b85f4b4199"
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.367631    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n9hf\" (UniqueName: \"kubernetes.io/projected/a2abbccc-9f95-4a37-8198-40d424cdcb00-kube-api-access-7n9hf\") pod \"a2abbccc-9f95-4a37-8198-40d424cdcb00\" (UID: \"a2abbccc-9f95-4a37-8198-40d424cdcb00\") "
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.367706    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbkhp\" (UniqueName: \"kubernetes.io/projected/c6fffe66-001f-45ee-9860-645249413bc6-kube-api-access-nbkhp\") pod \"c6fffe66-001f-45ee-9860-645249413bc6\" (UID: \"c6fffe66-001f-45ee-9860-645249413bc6\") "
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.369806    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fffe66-001f-45ee-9860-645249413bc6-kube-api-access-nbkhp" (OuterVolumeSpecName: "kube-api-access-nbkhp") pod "c6fffe66-001f-45ee-9860-645249413bc6" (UID: "c6fffe66-001f-45ee-9860-645249413bc6"). InnerVolumeSpecName "kube-api-access-nbkhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.369929    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2abbccc-9f95-4a37-8198-40d424cdcb00-kube-api-access-7n9hf" (OuterVolumeSpecName: "kube-api-access-7n9hf") pod "a2abbccc-9f95-4a37-8198-40d424cdcb00" (UID: "a2abbccc-9f95-4a37-8198-40d424cdcb00"). InnerVolumeSpecName "kube-api-access-7n9hf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.468881    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7n9hf\" (UniqueName: \"kubernetes.io/projected/a2abbccc-9f95-4a37-8198-40d424cdcb00-kube-api-access-7n9hf\") on node \"addons-393052\" DevicePath \"\""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.468926    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nbkhp\" (UniqueName: \"kubernetes.io/projected/c6fffe66-001f-45ee-9860-645249413bc6-kube-api-access-nbkhp\") on node \"addons-393052\" DevicePath \"\""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.569112    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs75p\" (UniqueName: \"kubernetes.io/projected/a80c82cd-062b-4949-be24-1278338d230e-kube-api-access-xs75p\") pod \"a80c82cd-062b-4949-be24-1278338d230e\" (UID: \"a80c82cd-062b-4949-be24-1278338d230e\") "
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.569182    2447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a80c82cd-062b-4949-be24-1278338d230e-gcp-creds\") pod \"a80c82cd-062b-4949-be24-1278338d230e\" (UID: \"a80c82cd-062b-4949-be24-1278338d230e\") "
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.569260    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a80c82cd-062b-4949-be24-1278338d230e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a80c82cd-062b-4949-be24-1278338d230e" (UID: "a80c82cd-062b-4949-be24-1278338d230e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.571132    2447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80c82cd-062b-4949-be24-1278338d230e-kube-api-access-xs75p" (OuterVolumeSpecName: "kube-api-access-xs75p") pod "a80c82cd-062b-4949-be24-1278338d230e" (UID: "a80c82cd-062b-4949-be24-1278338d230e"). InnerVolumeSpecName "kube-api-access-xs75p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.669781    2447 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a80c82cd-062b-4949-be24-1278338d230e-gcp-creds\") on node \"addons-393052\" DevicePath \"\""
	Sep 27 17:09:49 addons-393052 kubelet[2447]: I0927 17:09:49.669820    2447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xs75p\" (UniqueName: \"kubernetes.io/projected/a80c82cd-062b-4949-be24-1278338d230e-kube-api-access-xs75p\") on node \"addons-393052\" DevicePath \"\""
	
	
	==> storage-provisioner [783779b671a4] <==
	I0927 16:57:15.728067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 16:57:15.738047       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 16:57:15.738120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 16:57:15.824378       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 16:57:15.824586       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-393052_6654f9a2-2973-4da9-b0ff-91f9d5a80b29!
	I0927 16:57:15.825900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05f36fab-27f5-42f8-aa33-9498d870561f", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-393052_6654f9a2-2973-4da9-b0ff-91f9d5a80b29 became leader
	I0927 16:57:15.934899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-393052_6654f9a2-2973-4da9-b0ff-91f9d5a80b29!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-393052 -n addons-393052
helpers_test.go:261: (dbg) Run:  kubectl --context addons-393052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-393052 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-393052 describe pod busybox:
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-393052/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 17:00:36 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdrx2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sdrx2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                     From               Message
	  ----     ------          ----                    ----               -------
	  Normal   Scheduled       9m14s                   default-scheduler  Successfully assigned default/busybox to addons-393052
	  Normal   SandboxChanged  9m13s                   kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          7m54s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling         7m41s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m41s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m41s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Normal   BackOff         4m13s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.45s)
Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 20.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 10.85
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.98
21 TestBinaryMirror 0.75
22 TestOffline 44.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 210.63
29 TestAddons/serial/Volcano 42.25
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.26
35 TestAddons/parallel/InspektorGadget 10.59
36 TestAddons/parallel/MetricsServer 5.7
38 TestAddons/parallel/CSI 46.69
39 TestAddons/parallel/Headlamp 16.35
40 TestAddons/parallel/CloudSpanner 5.4
41 TestAddons/parallel/LocalPath 53.87
42 TestAddons/parallel/NvidiaDevicePlugin 5.4
43 TestAddons/parallel/Yakd 10.64
44 TestAddons/StoppedEnableDisable 10.99
45 TestCertOptions 30.63
46 TestCertExpiration 234.51
47 TestDockerFlags 25.58
48 TestForceSystemdFlag 37.19
49 TestForceSystemdEnv 27.61
51 TestKVMDriverInstallOrUpdate 5.11
55 TestErrorSpam/setup 23.98
56 TestErrorSpam/start 0.54
57 TestErrorSpam/status 0.82
58 TestErrorSpam/pause 1.11
59 TestErrorSpam/unpause 1.36
60 TestErrorSpam/stop 10.86
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 32.56
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.21
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.45
72 TestFunctional/serial/CacheCmd/cache/add_local 1.43
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.22
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 40.07
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.96
83 TestFunctional/serial/LogsFileCmd 1.01
84 TestFunctional/serial/InvalidService 4.67
86 TestFunctional/parallel/ConfigCmd 0.32
87 TestFunctional/parallel/DashboardCmd 15.82
88 TestFunctional/parallel/DryRun 0.48
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 7.55
95 TestFunctional/parallel/AddonsCmd 0.16
96 TestFunctional/parallel/PersistentVolumeClaim 43.21
98 TestFunctional/parallel/SSHCmd 0.64
99 TestFunctional/parallel/CpCmd 1.71
100 TestFunctional/parallel/MySQL 24.83
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.54
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
110 TestFunctional/parallel/License 0.72
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
113 TestFunctional/parallel/ProfileCmd/profile_list 0.37
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
115 TestFunctional/parallel/MountCmd/any-port 7.76
116 TestFunctional/parallel/MountCmd/specific-port 1.52
117 TestFunctional/parallel/ServiceCmd/List 0.51
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
119 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
121 TestFunctional/parallel/ServiceCmd/Format 0.4
122 TestFunctional/parallel/ServiceCmd/URL 0.39
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.51
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.56
130 TestFunctional/parallel/ImageCommands/Setup 1.92
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
143 TestFunctional/parallel/DockerEnv/bash 0.95
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 96.96
160 TestMultiControlPlane/serial/DeployApp 44.11
161 TestMultiControlPlane/serial/PingHostFromPods 1.04
162 TestMultiControlPlane/serial/AddWorkerNode 20.22
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
165 TestMultiControlPlane/serial/CopyFile 15.12
166 TestMultiControlPlane/serial/StopSecondaryNode 11.36
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 37.76
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 243.16
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.33
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 32.36
174 TestMultiControlPlane/serial/RestartCluster 100.14
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 36.23
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
180 TestImageBuild/serial/Setup 23.67
181 TestImageBuild/serial/NormalBuild 2.62
182 TestImageBuild/serial/BuildWithBuildArg 0.95
183 TestImageBuild/serial/BuildWithDockerIgnore 0.86
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
188 TestJSONOutput/start/Command 65.26
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.51
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.41
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.72
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
213 TestKicCustomNetwork/create_custom_network 26.04
214 TestKicCustomNetwork/use_default_bridge_network 26.69
215 TestKicExistingNetwork 26.31
216 TestKicCustomSubnet 26.33
217 TestKicStaticIP 23.43
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 48.47
222 TestMountStart/serial/StartWithMountFirst 10.34
223 TestMountStart/serial/VerifyMountFirst 0.23
224 TestMountStart/serial/StartWithMountSecond 7.12
225 TestMountStart/serial/VerifyMountSecond 0.23
226 TestMountStart/serial/DeleteFirst 1.45
227 TestMountStart/serial/VerifyMountPostDelete 0.23
228 TestMountStart/serial/Stop 1.17
229 TestMountStart/serial/RestartStopped 8.76
230 TestMountStart/serial/VerifyMountPostStop 0.24
233 TestMultiNode/serial/FreshStart2Nodes 76.47
234 TestMultiNode/serial/DeployApp2Nodes 37.51
235 TestMultiNode/serial/PingHostFrom2Pods 0.71
236 TestMultiNode/serial/AddNode 15.4
237 TestMultiNode/serial/MultiNodeLabels 0.08
238 TestMultiNode/serial/ProfileList 0.66
239 TestMultiNode/serial/CopyFile 8.67
240 TestMultiNode/serial/StopNode 2.05
241 TestMultiNode/serial/StartAfterStop 9.59
242 TestMultiNode/serial/RestartKeepsNodes 95.69
243 TestMultiNode/serial/DeleteNode 5.16
244 TestMultiNode/serial/StopMultiNode 21.41
245 TestMultiNode/serial/RestartMultiNode 53.96
246 TestMultiNode/serial/ValidateNameConflict 26.6
251 TestPreload 155.77
253 TestScheduledStopUnix 97.4
254 TestSkaffold 102.5
256 TestInsufficientStorage 12.6
257 TestRunningBinaryUpgrade 79.56
259 TestKubernetesUpgrade 343.55
260 TestMissingContainerUpgrade 162.16
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 34.35
264 TestNoKubernetes/serial/StartWithStopK8s 16.83
265 TestStoppedBinaryUpgrade/Setup 2.49
266 TestStoppedBinaryUpgrade/Upgrade 147.99
267 TestNoKubernetes/serial/Start 7.54
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
269 TestNoKubernetes/serial/ProfileList 1.39
270 TestNoKubernetes/serial/Stop 1.19
271 TestNoKubernetes/serial/StartNoArgs 7.82
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.7
293 TestPause/serial/Start 72.3
295 TestStartStop/group/old-k8s-version/serial/FirstStart 150.41
297 TestStartStop/group/no-preload/serial/FirstStart 44.67
298 TestPause/serial/SecondStartNoReconfiguration 33.84
299 TestStartStop/group/no-preload/serial/DeployApp 8.27
300 TestPause/serial/Pause 0.53
301 TestPause/serial/VerifyStatus 0.28
302 TestPause/serial/Unpause 0.45
303 TestPause/serial/PauseAgain 0.67
304 TestPause/serial/DeletePaused 2.07
305 TestPause/serial/VerifyDeletedResources 0.75
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.33
309 TestStartStop/group/no-preload/serial/Stop 10.85
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
311 TestStartStop/group/no-preload/serial/SecondStart 263.04
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
314 TestStartStop/group/newest-cni/serial/FirstStart 29.88
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.67
317 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.86
320 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
321 TestStartStop/group/old-k8s-version/serial/Stop 11.12
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
324 TestStartStop/group/newest-cni/serial/Stop 5.77
325 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/old-k8s-version/serial/SecondStart 137.53
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
328 TestStartStop/group/newest-cni/serial/SecondStart 17.13
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
332 TestStartStop/group/newest-cni/serial/Pause 2.53
334 TestStartStop/group/embed-certs/serial/FirstStart 69.8
335 TestStartStop/group/embed-certs/serial/DeployApp 10.24
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.79
337 TestStartStop/group/embed-certs/serial/Stop 10.74
338 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/embed-certs/serial/SecondStart 263.89
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
343 TestStartStop/group/old-k8s-version/serial/Pause 2.31
344 TestNetworkPlugins/group/auto/Start 71.16
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
348 TestStartStop/group/no-preload/serial/Pause 2.32
349 TestNetworkPlugins/group/kindnet/Start 57.17
350 TestNetworkPlugins/group/auto/KubeletFlags 0.31
351 TestNetworkPlugins/group/auto/NetCatPod 9.24
352 TestNetworkPlugins/group/auto/DNS 0.14
353 TestNetworkPlugins/group/auto/Localhost 0.13
354 TestNetworkPlugins/group/auto/HairPin 0.12
355 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
357 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
359 TestNetworkPlugins/group/calico/Start 65.49
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.07
361 TestNetworkPlugins/group/kindnet/DNS 0.15
362 TestNetworkPlugins/group/kindnet/Localhost 0.11
363 TestNetworkPlugins/group/kindnet/HairPin 0.11
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.42
366 TestNetworkPlugins/group/custom-flannel/Start 48.12
367 TestNetworkPlugins/group/false/Start 70.43
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/custom-flannel/DNS 0.14
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
374 TestNetworkPlugins/group/calico/KubeletFlags 0.25
375 TestNetworkPlugins/group/calico/NetCatPod 10.18
376 TestNetworkPlugins/group/calico/DNS 0.15
377 TestNetworkPlugins/group/calico/Localhost 0.12
378 TestNetworkPlugins/group/calico/HairPin 0.13
379 TestNetworkPlugins/group/enable-default-cni/Start 71.71
380 TestNetworkPlugins/group/false/KubeletFlags 0.33
381 TestNetworkPlugins/group/false/NetCatPod 9.27
382 TestNetworkPlugins/group/flannel/Start 47.87
383 TestNetworkPlugins/group/false/DNS 0.15
384 TestNetworkPlugins/group/false/Localhost 0.13
385 TestNetworkPlugins/group/false/HairPin 0.12
386 TestNetworkPlugins/group/bridge/Start 43.31
387 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.05
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/embed-certs/serial/Pause 2.42
391 TestNetworkPlugins/group/kubenet/Start 67.73
392 TestNetworkPlugins/group/flannel/ControllerPod 6.01
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
394 TestNetworkPlugins/group/flannel/NetCatPod 11.2
395 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
396 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
397 TestNetworkPlugins/group/flannel/DNS 0.18
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
399 TestNetworkPlugins/group/flannel/Localhost 0.12
400 TestNetworkPlugins/group/flannel/HairPin 0.12
401 TestNetworkPlugins/group/bridge/NetCatPod 8.22
402 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
403 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
404 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
405 TestNetworkPlugins/group/bridge/DNS 0.15
406 TestNetworkPlugins/group/bridge/Localhost 0.14
407 TestNetworkPlugins/group/bridge/HairPin 0.14
408 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
409 TestNetworkPlugins/group/kubenet/NetCatPod 9.17
410 TestNetworkPlugins/group/kubenet/DNS 0.12
411 TestNetworkPlugins/group/kubenet/Localhost 0.11
412 TestNetworkPlugins/group/kubenet/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (20.39s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-016471 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-016471 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (20.386456771s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.39s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 16:56:09.376897   17824 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0927 16:56:09.376975   17824 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-016471
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-016471: exit status 85 (58.122336ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-016471 | jenkins | v1.34.0 | 27 Sep 24 16:55 UTC |          |
	|         | -p download-only-016471        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 16:55:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 16:55:49.028019   17836 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:55:49.028132   17836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:55:49.028143   17836 out.go:358] Setting ErrFile to fd 2...
	I0927 16:55:49.028148   17836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:55:49.028351   17836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	W0927 16:55:49.028478   17836 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19712-11000/.minikube/config/config.json: open /home/jenkins/minikube-integration/19712-11000/.minikube/config/config.json: no such file or directory
	I0927 16:55:49.029052   17836 out.go:352] Setting JSON to true
	I0927 16:55:49.030038   17836 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2296,"bootTime":1727453853,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:55:49.030151   17836 start.go:139] virtualization: kvm guest
	I0927 16:55:49.032974   17836 out.go:97] [download-only-016471] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 16:55:49.033174   17836 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 16:55:49.033244   17836 notify.go:220] Checking for updates...
	I0927 16:55:49.035010   17836 out.go:169] MINIKUBE_LOCATION=19712
	I0927 16:55:49.036517   17836 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:55:49.038031   17836 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 16:55:49.039520   17836 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	I0927 16:55:49.040767   17836 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 16:55:49.043272   17836 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 16:55:49.043576   17836 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:55:49.065181   17836 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 16:55:49.065278   17836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:55:49.438816   17836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 16:55:49.429892403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:55:49.438923   17836 docker.go:318] overlay module found
	I0927 16:55:49.440975   17836 out.go:97] Using the docker driver based on user configuration
	I0927 16:55:49.441010   17836 start.go:297] selected driver: docker
	I0927 16:55:49.441019   17836 start.go:901] validating driver "docker" against <nil>
	I0927 16:55:49.441093   17836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:55:49.489921   17836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 16:55:49.481915868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:55:49.490086   17836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:55:49.490602   17836 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0927 16:55:49.490773   17836 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 16:55:49.492570   17836 out.go:169] Using Docker driver with root privileges
	I0927 16:55:49.493633   17836 cni.go:84] Creating CNI manager for ""
	I0927 16:55:49.493697   17836 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 16:55:49.493775   17836 start.go:340] cluster config:
	{Name:download-only-016471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-016471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:55:49.495035   17836 out.go:97] Starting "download-only-016471" primary control-plane node in "download-only-016471" cluster
	I0927 16:55:49.495052   17836 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 16:55:49.496259   17836 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 16:55:49.496285   17836 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 16:55:49.496388   17836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 16:55:49.511402   17836 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 16:55:49.511555   17836 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 16:55:49.511638   17836 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 16:55:49.605147   17836 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0927 16:55:49.605190   17836 cache.go:56] Caching tarball of preloaded images
	I0927 16:55:49.605342   17836 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 16:55:49.607270   17836 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 16:55:49.607289   17836 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:55:49.721247   17836 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0927 16:56:05.717750   17836 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 16:56:07.670356   17836 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:56:07.670456   17836 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:56:08.443450   17836 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 16:56:08.443784   17836 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/download-only-016471/config.json ...
	I0927 16:56:08.443814   17836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/download-only-016471/config.json: {Name:mk298d031d37b33ab6f7d58e837e0a92b00fe455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:08.444003   17836 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 16:56:08.444211   17836 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19712-11000/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-016471 host does not exist
	  To start a cluster, run: "minikube start -p download-only-016471"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-016471
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (10.85s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-131238 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-131238 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.84523035s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.85s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 16:56:20.601687   17824 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 16:56:20.601728   17824 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-131238
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-131238: exit status 85 (58.263721ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-016471 | jenkins | v1.34.0 | 27 Sep 24 16:55 UTC |                     |
	|         | -p download-only-016471        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| delete  | -p download-only-016471        | download-only-016471 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| start   | -o=json --download-only        | download-only-131238 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | -p download-only-131238        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 16:56:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 16:56:09.793716   18233 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:56:09.793855   18233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:09.793866   18233 out.go:358] Setting ErrFile to fd 2...
	I0927 16:56:09.793872   18233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:09.794062   18233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 16:56:09.794667   18233 out.go:352] Setting JSON to true
	I0927 16:56:09.795515   18233 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2317,"bootTime":1727453853,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:56:09.795620   18233 start.go:139] virtualization: kvm guest
	I0927 16:56:09.797922   18233 out.go:97] [download-only-131238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 16:56:09.798050   18233 notify.go:220] Checking for updates...
	I0927 16:56:09.799629   18233 out.go:169] MINIKUBE_LOCATION=19712
	I0927 16:56:09.801165   18233 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:56:09.802705   18233 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 16:56:09.804045   18233 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	I0927 16:56:09.805430   18233 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 16:56:09.808059   18233 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 16:56:09.808270   18233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:56:09.829396   18233 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 16:56:09.829467   18233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:56:09.878689   18233 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 16:56:09.869953656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:56:09.878792   18233 docker.go:318] overlay module found
	I0927 16:56:09.880601   18233 out.go:97] Using the docker driver based on user configuration
	I0927 16:56:09.880639   18233 start.go:297] selected driver: docker
	I0927 16:56:09.880645   18233 start.go:901] validating driver "docker" against <nil>
	I0927 16:56:09.880737   18233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 16:56:09.929691   18233 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 16:56:09.918841526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 16:56:09.929842   18233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:56:09.930314   18233 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0927 16:56:09.930442   18233 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 16:56:09.932257   18233 out.go:169] Using Docker driver with root privileges
	I0927 16:56:09.933404   18233 cni.go:84] Creating CNI manager for ""
	I0927 16:56:09.933466   18233 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 16:56:09.933476   18233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 16:56:09.933533   18233 start.go:340] cluster config:
	{Name:download-only-131238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-131238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:56:09.934905   18233 out.go:97] Starting "download-only-131238" primary control-plane node in "download-only-131238" cluster
	I0927 16:56:09.934917   18233 cache.go:121] Beginning downloading kic base image for docker with docker
	I0927 16:56:09.936039   18233 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 16:56:09.936057   18233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:09.936102   18233 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 16:56:09.951083   18233 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 16:56:09.951183   18233 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 16:56:09.951199   18233 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 16:56:09.951203   18233 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 16:56:09.951216   18233 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 16:56:10.050058   18233 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0927 16:56:10.050085   18233 cache.go:56] Caching tarball of preloaded images
	I0927 16:56:10.050240   18233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:10.052210   18233 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 16:56:10.052228   18233 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:56:10.158002   18233 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0927 16:56:18.884816   18233 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:56:18.884927   18233 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19712-11000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0927 16:56:19.538799   18233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 16:56:19.539165   18233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/download-only-131238/config.json ...
	I0927 16:56:19.539205   18233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/download-only-131238/config.json: {Name:mkd29551855e84ffb8c4f35320be6d9a9bb8af81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:19.540387   18233 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 16:56:19.540626   18233 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19712-11000/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-131238 host does not exist
	  To start a cluster, run: "minikube start -p download-only-131238"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
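The preload flow in the log above fetches the tarball from a URL carrying an `?checksum=md5:...` query, then saves and verifies the checksum locally (`preload.go:247`/`preload.go:254`). That is a plain download-then-verify pattern; a minimal shell sketch of it, using a stand-in file rather than the real tarball or `~/.minikube/cache` layout:

```shell
# Sketch of the download-then-verify step (files and paths are stand-ins,
# not minikube's real cache layout).
set -eu
workdir=$(mktemp -d)
tarball="$workdir/preloaded-images.tar.lz4"

printf 'preloaded image data' > "$tarball"      # stand-in for the downloaded tarball

expected=$(md5sum "$tarball" | cut -d' ' -f1)   # digest the ?checksum=md5:... URL would carry
actual=$(md5sum "$tarball" | cut -d' ' -f1)     # recomputed after the download completes

if [ "$actual" = "$expected" ]; then
  echo "preload checksum OK"
else
  echo "preload checksum mismatch: $actual != $expected" >&2
  exit 1
fi
```

The point of re-hashing after the fetch is that a truncated or corrupted download fails loudly before the tarball is ever extracted into the image cache.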

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-131238
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (0.98s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-229474 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-229474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-229474
--- PASS: TestDownloadOnlyKic (0.98s)

                                                
                                    
TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
I0927 16:56:22.208685   17824 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-269269 --alsologtostderr --binary-mirror http://127.0.0.1:36853 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-269269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-269269
--- PASS: TestBinaryMirror (0.75s)
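TestBinaryMirror exercises the `checksum=file:` variant visible in the kubectl URL above, where the expected digest is published as a `.sha256` sidecar next to the binary instead of being embedded in the URL. A hedged sketch of that sidecar check, with a fake binary standing in for the real download:

```shell
# Sidecar-digest check in the style of
# .../kubectl?checksum=file:.../kubectl.sha256 — files here are fakes.
set -eu
workdir=$(mktemp -d)
cd "$workdir"

printf 'fake kubectl binary' > kubectl               # stand-in for the downloaded binary
sha256sum kubectl | cut -d' ' -f1 > kubectl.sha256   # what a mirror would publish alongside it

computed=$(sha256sum kubectl | cut -d' ' -f1)
published=$(cat kubectl.sha256)
if [ "$computed" = "$published" ]; then
  echo "kubectl digest verified"
else
  echo "kubectl digest mismatch" >&2
  exit 1
fi
```

Serving the digest as a separate file is what lets `--binary-mirror` point at any plain HTTP server: the client only needs to fetch two files and compare hashes.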

                                                
                                    
TestOffline (44.43s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-262319 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-262319 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (42.309346972s)
helpers_test.go:175: Cleaning up "offline-docker-262319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-262319
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-262319: (2.118058837s)
--- PASS: TestOffline (44.43s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-393052
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-393052: exit status 85 (48.657715ms)

                                                
                                                
-- stdout --
	* Profile "addons-393052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-393052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-393052
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-393052: exit status 85 (49.03969ms)

                                                
                                                
-- stdout --
	* Profile "addons-393052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-393052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (210.63s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-393052 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-393052 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m30.629620745s)
--- PASS: TestAddons/Setup (210.63s)

                                                
                                    
TestAddons/serial/Volcano (42.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 11.81376ms
addons_test.go:835: volcano-scheduler stabilized in 11.904414ms
addons_test.go:851: volcano-controller stabilized in 11.934713ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-c45j5" [dc7baf4a-b456-4a0a-b461-500e35fe183e] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003643218s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-wclls" [b374fc4e-953b-40d0-a25c-61e68d0fb085] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003651937s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-d5swp" [1cf25a1a-b8fd-4f28-8ecc-14208f56821a] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003915944s
addons_test.go:870: (dbg) Run:  kubectl --context addons-393052 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-393052 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-393052 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3b0714fa-4221-44ca-a3d5-008163a75577] Pending
helpers_test.go:344: "test-job-nginx-0" [3b0714fa-4221-44ca-a3d5-008163a75577] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3b0714fa-4221-44ca-a3d5-008163a75577] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004220975s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable volcano --alsologtostderr -v=1: (10.877248975s)
--- PASS: TestAddons/serial/Volcano (42.25s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-393052 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-393052 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/Ingress (21.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-393052 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-393052 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-393052 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [de830b21-f98b-45b9-8c1e-0df2def4039a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [de830b21-f98b-45b9-8c1e-0df2def4039a] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003195773s
I0927 17:09:11.544212   17824 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-393052 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable ingress-dns --alsologtostderr -v=1: (1.21017413s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable ingress --alsologtostderr -v=1: (7.876849402s)
--- PASS: TestAddons/parallel/Ingress (21.26s)

TestAddons/parallel/InspektorGadget (10.59s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lf7jr" [13e84388-d980-4d88-90a6-fc07d18904b6] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004237944s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-393052
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-393052: (5.585106984s)
--- PASS: TestAddons/parallel/InspektorGadget (10.59s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.929229ms
I0927 17:08:38.357662   17824 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 17:08:38.357684   17824 kapi.go:107] duration metric: took 4.517382ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vccps" [1e0b8e4f-fed1-4898-8e4f-a4225ff47189] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003376759s
addons_test.go:413: (dbg) Run:  kubectl --context addons-393052 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/CSI (46.69s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 17:08:38.353175   17824 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.530316ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-393052 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-393052 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4e1b9fcd-963b-4530-bf80-67cf974013fd] Pending
helpers_test.go:344: "task-pv-pod" [4e1b9fcd-963b-4530-bf80-67cf974013fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4e1b9fcd-963b-4530-bf80-67cf974013fd] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003302914s
addons_test.go:528: (dbg) Run:  kubectl --context addons-393052 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-393052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-393052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-393052 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-393052 delete pod task-pv-pod: (1.308398497s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-393052 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-393052 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-393052 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a8017938-a262-4009-90b9-36f4fcd387ca] Pending
helpers_test.go:344: "task-pv-pod-restore" [a8017938-a262-4009-90b9-36f4fcd387ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a8017938-a262-4009-90b9-36f4fcd387ca] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003086991s
addons_test.go:570: (dbg) Run:  kubectl --context addons-393052 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-393052 delete pod task-pv-pod-restore: (1.054836565s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-393052 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-393052 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.485632476s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.69s)

TestAddons/parallel/Headlamp (16.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-393052 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8dwl8" [41a7138d-e7bc-4ac8-a4f4-d374aa89b8b6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8dwl8" [41a7138d-e7bc-4ac8-a4f4-d374aa89b8b6] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.002644624s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable headlamp --alsologtostderr -v=1: (5.640496292s)
--- PASS: TestAddons/parallel/Headlamp (16.35s)

TestAddons/parallel/CloudSpanner (5.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-mxzpw" [7a6ff2df-9507-4fb7-8004-cee3dc826b7e] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003315606s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-393052
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

TestAddons/parallel/LocalPath (53.87s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-393052 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-393052 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2d60fbcb-2fbc-431c-a269-26eb4ec1565a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2d60fbcb-2fbc-431c-a269-26eb4ec1565a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2d60fbcb-2fbc-431c-a269-26eb4ec1565a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003424048s
addons_test.go:938: (dbg) Run:  kubectl --context addons-393052 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 ssh "cat /opt/local-path-provisioner/pvc-cc992d79-9229-48dc-815e-b7a98bf6633a_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-393052 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-393052 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.070590592s)
--- PASS: TestAddons/parallel/LocalPath (53.87s)

TestAddons/parallel/NvidiaDevicePlugin (5.4s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f5t54" [b23a243e-32fe-4cd7-979d-e8f6bd767c6c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003665794s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-393052
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.40s)

TestAddons/parallel/Yakd (10.64s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tnmkc" [47150af3-0ed0-4a4b-a05c-7b4f24f8432a] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003681001s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-393052 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-393052 addons disable yakd --alsologtostderr -v=1: (5.632838886s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

TestAddons/StoppedEnableDisable (10.99s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-393052
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-393052: (10.757637123s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-393052
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-393052
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-393052
--- PASS: TestAddons/StoppedEnableDisable (10.99s)

TestCertOptions (30.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-825446 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-825446 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.68919817s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-825446 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-825446 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-825446 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-825446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-825446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-825446: (2.12958146s)
--- PASS: TestCertOptions (30.63s)

TestCertExpiration (234.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-381364 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-381364 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (31.964931148s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-381364 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-381364 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.279947646s)
helpers_test.go:175: Cleaning up "cert-expiration-381364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-381364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-381364: (2.262699055s)
--- PASS: TestCertExpiration (234.51s)

TestDockerFlags (25.58s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-937294 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-937294 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.863730446s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-937294 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-937294 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-937294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-937294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-937294: (2.166703237s)
--- PASS: TestDockerFlags (25.58s)

TestForceSystemdFlag (37.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-296604 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-296604 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.647557119s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-296604 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-296604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-296604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-296604: (2.164879083s)
--- PASS: TestForceSystemdFlag (37.19s)

TestForceSystemdEnv (27.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-289167 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0927 17:46:41.102228   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.108612   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.120047   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.141484   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.182924   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.264359   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.426597   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:41.748419   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:42.389957   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:43.672267   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-289167 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.784389084s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-289167 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-289167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-289167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-289167: (4.469947602s)
--- PASS: TestForceSystemdEnv (27.61s)

TestKVMDriverInstallOrUpdate (5.11s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0927 17:45:52.424296   17824 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 17:45:52.424461   17824 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0927 17:45:52.455551   17824 install.go:62] docker-machine-driver-kvm2: exit status 1
W0927 17:45:52.455955   17824 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 17:45:52.456023   17824 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1595114204/001/docker-machine-driver-kvm2
I0927 17:45:52.701889   17824 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1595114204/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc0003f22d0 gz:0xc0003f22d8 tar:0xc0003f21c0 tar.bz2:0xc0003f21e0 tar.gz:0xc0003f21f0 tar.xz:0xc0003f2250 tar.zst:0xc0003f2280 tbz2:0xc0003f21e0 tgz:0xc0003f21f0 txz:0xc0003f2250 tzst:0xc0003f2280 xz:0xc0003f22f0 zip:0xc0003f2300 zst:0xc0003f22f8] Getters:map[file:0xc000a606a0 http:0xc0000e83c0 https:0xc0000e8410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 17:45:52.701931   17824 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1595114204/001/docker-machine-driver-kvm2
I0927 17:45:55.443262   17824 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 17:45:55.443410   17824 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0927 17:45:55.483742   17824 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0927 17:45:55.483782   17824 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0927 17:45:55.483866   17824 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 17:45:55.483894   17824 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1595114204/002/docker-machine-driver-kvm2
I0927 17:45:55.544074   17824 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1595114204/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc0003f22d0 gz:0xc0003f22d8 tar:0xc0003f21c0 tar.bz2:0xc0003f21e0 tar.gz:0xc0003f21f0 tar.xz:0xc0003f2250 tar.zst:0xc0003f2280 tbz2:0xc0003f21e0 tgz:0xc0003f21f0 txz:0xc0003f2250 tzst:0xc0003f2280 xz:0xc0003f22f0 zip:0xc0003f2300 zst:0xc0003f22f8] Getters:map[file:0xc00193ea80 http:0xc00001d4f0 https:0xc00001d540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 17:45:55.544135   17824 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1595114204/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.11s)
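The driver download above shows a fallback pattern: minikube first tries the arch-specific release URL, and when its checksum file 404s, it retries the common (un-suffixed) URL. A minimal Python sketch of that try-then-fall-back flow — this is illustrative only, not minikube's actual Go implementation; `fetch`, `fake_fetch`, and the `example.invalid` URLs are hypothetical stand-ins.

```python
def download_with_fallback(fetch, arch_url, common_url):
    """Try the arch-specific URL first; on any failure, fall back to the
    common URL, mirroring the behavior recorded in the log above."""
    try:
        return fetch(arch_url)
    except Exception as err:
        # Mirrors "failed to download arch specific driver ...
        # trying to get the common version" from the log.
        print(f"failed to download arch specific driver: {err}. "
              "trying to get the common version")
        return fetch(common_url)


def fake_fetch(url):
    # Simulated server: the arch-specific checksum file is missing (404).
    if url.endswith("-amd64"):
        raise RuntimeError("invalid checksum: bad response code: 404")
    return b"driver-bytes"
```

Called with the two release URLs from the log, this would return the common build's bytes after logging the arch-specific failure.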

TestErrorSpam/setup (23.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-319960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-319960 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-319960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-319960 --driver=docker  --container-runtime=docker: (23.976652228s)
--- PASS: TestErrorSpam/setup (23.98s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 pause
--- PASS: TestErrorSpam/pause (1.11s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (10.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 stop: (10.682470391s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-319960 --log_dir /tmp/nospam-319960 stop
--- PASS: TestErrorSpam/stop (10.86s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19712-11000/.minikube/files/etc/test/nested/copy/17824/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (32.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-712810 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (32.557225716s)
--- PASS: TestFunctional/serial/StartWithProxy (32.56s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.21s)

=== RUN   TestFunctional/serial/SoftStart
I0927 17:11:25.681183   17824 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-712810 --alsologtostderr -v=8: (40.212179224s)
functional_test.go:663: soft start took 40.212895512s for "functional-712810" cluster.
I0927 17:12:05.893714   17824 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (40.21s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-712810 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-712810 /tmp/TestFunctionalserialCacheCmdcacheadd_local3796423340/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache add minikube-local-cache-test:functional-712810
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-712810 cache add minikube-local-cache-test:functional-712810: (1.08982545s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache delete minikube-local-cache-test:functional-712810
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-712810
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (257.291181ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)
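The cache_reload test above removes a cached image from the node, confirms `crictl inspecti` fails, runs `minikube cache reload`, and confirms the image is back. A hedged Python sketch of that check-reload-recheck flow, with the node's images and minikube's cache modeled as plain sets (the real test shells out to `minikube ssh` and `crictl`; these names are stand-ins):

```python
def ensure_cached_image(node_images, cache, image):
    """Re-load a cached image onto the node if it has gone missing.

    node_images: set of images on the node (stand-in for `crictl images`).
    cache: set of images minikube has cached (stand-in for `cache list`).
    Returns True if the image is present on the node afterwards.
    """
    if image in node_images:
        return True
    if image in cache:
        # Stand-in for `minikube cache reload`, which pushes every
        # cached image back into the node's container runtime.
        node_images.update(cache)
    return image in node_images
```

Simulating the test: delete `pause:latest` from the node, then reload it from the cache.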

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 kubectl -- --context functional-712810 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-712810 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.07s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-712810 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.066852085s)
functional_test.go:761: restart took 40.067026914s for "functional-712810" cluster.
I0927 17:12:51.829508   17824 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.07s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-712810 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.96s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 logs
--- PASS: TestFunctional/serial/LogsCmd (0.96s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 logs --file /tmp/TestFunctionalserialLogsFileCmd1159275096/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-712810 logs --file /tmp/TestFunctionalserialLogsFileCmd1159275096/001/logs.txt: (1.012214033s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-712810 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-712810
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-712810: exit status 115 (304.390278ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31523 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-712810 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-712810 delete -f testdata/invalidsvc.yaml: (1.194620593s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)
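The InvalidService test exercises the SVC_UNREACHABLE path: the service exists and has a NodePort, but no running pod backs it, so `minikube service` exits with status 115. A sketch of that reachability decision — the exit code and message text are taken from the log above; the function and its `running_pods` list are hypothetical stand-ins for minikube's endpoint check:

```python
def service_url_or_error(service_name, node_ip, node_port, running_pods):
    """Return (exit_code, result) for a `minikube service`-style lookup.

    running_pods: pods currently backing the service (stand-in for a
    label-selector query against the cluster).
    """
    if not running_pods:
        # Mirrors the SVC_UNREACHABLE exit recorded in the log.
        return 115, (f"X Exiting due to SVC_UNREACHABLE: service not "
                     f"available: no running pod for service "
                     f"{service_name} found")
    return 0, f"http://{node_ip}:{node_port}"
```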

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 config get cpus: exit status 14 (59.835269ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 config get cpus: exit status 14 (47.353742ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
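The ConfigCmd run above shows the exit-code contract of `minikube config`: `get` on an unset key exits 14 with "specified key could not be found in config", while `set` and `unset` succeed unconditionally. A minimal Python model of that contract — exit code 14 and the error text are taken from the log; the dict is a stand-in for minikube's per-profile config file:

```python
EXIT_OK = 0
EXIT_NOT_FOUND = 14  # exit status observed in the log for a missing key

def config_cmd(store, action, key, value=None):
    """Model of `minikube config set/get/unset` exit behavior."""
    if action == "set":
        store[key] = value
        return EXIT_OK, None
    if action == "unset":
        store.pop(key, None)  # unsetting a missing key still succeeds
        return EXIT_OK, None
    if action == "get":
        if key not in store:
            return EXIT_NOT_FOUND, "Error: specified key could not be found in config"
        return EXIT_OK, store[key]
    raise ValueError(f"unknown action: {action}")
```

Replaying the test's sequence (unset → get → set → get → unset → get) reproduces the 14/0/14 pattern in the log.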

TestFunctional/parallel/DashboardCmd (15.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-712810 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-712810 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 74143: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.82s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-712810 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (165.438199ms)

-- stdout --
	* [functional-712810] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 17:13:01.501340   73510 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:13:01.501664   73510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:13:01.501678   73510 out.go:358] Setting ErrFile to fd 2...
	I0927 17:13:01.501685   73510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:13:01.502035   73510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:13:01.502750   73510 out.go:352] Setting JSON to false
	I0927 17:13:01.504150   73510 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3328,"bootTime":1727453853,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:13:01.504242   73510 start.go:139] virtualization: kvm guest
	I0927 17:13:01.506720   73510 out.go:177] * [functional-712810] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:13:01.508227   73510 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:13:01.508242   73510 notify.go:220] Checking for updates...
	I0927 17:13:01.511153   73510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:13:01.512758   73510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 17:13:01.515997   73510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	I0927 17:13:01.517540   73510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:13:01.518846   73510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:13:01.520688   73510 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:13:01.521151   73510 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:13:01.546875   73510 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:13:01.546985   73510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:13:01.604586   73510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 17:13:01.592116982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 17:13:01.604700   73510 docker.go:318] overlay module found
	I0927 17:13:01.607168   73510 out.go:177] * Using the docker driver based on existing profile
	I0927 17:13:01.608630   73510 start.go:297] selected driver: docker
	I0927 17:13:01.608652   73510 start.go:901] validating driver "docker" against &{Name:functional-712810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-712810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:13:01.608735   73510 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:13:01.610905   73510 out.go:201] 
	W0927 17:13:01.612768   73510 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 17:13:01.615037   73510 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-712810 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-712810 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.995682ms)

-- stdout --
	* [functional-712810] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 17:13:01.334147   73366 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:13:01.334278   73366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:13:01.334289   73366 out.go:358] Setting ErrFile to fd 2...
	I0927 17:13:01.334296   73366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:13:01.334643   73366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:13:01.335345   73366 out.go:352] Setting JSON to false
	I0927 17:13:01.336799   73366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3328,"bootTime":1727453853,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:13:01.336963   73366 start.go:139] virtualization: kvm guest
	I0927 17:13:01.339551   73366 out.go:177] * [functional-712810] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0927 17:13:01.340978   73366 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:13:01.341037   73366 notify.go:220] Checking for updates...
	I0927 17:13:01.343571   73366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:13:01.345168   73366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	I0927 17:13:01.346478   73366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	I0927 17:13:01.347748   73366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:13:01.349076   73366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:13:01.350614   73366 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:13:01.351151   73366 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:13:01.381317   73366 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:13:01.381403   73366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:13:01.439426   73366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-27 17:13:01.428524382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 17:13:01.439548   73366 docker.go:318] overlay module found
	I0927 17:13:01.441486   73366 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 17:13:01.442788   73366 start.go:297] selected driver: docker
	I0927 17:13:01.442806   73366 start.go:901] validating driver "docker" against &{Name:functional-712810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-712810 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:13:01.442924   73366 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:13:01.445535   73366 out.go:201] 
	W0927 17:13:01.446885   73366 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 17:13:01.448238   73366 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (7.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-712810 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-712810 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z6lx9" [984fd037-7207-4b92-a3bc-db6f1efb3405] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z6lx9" [984fd037-7207-4b92-a3bc-db6f1efb3405] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004018488s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31813
functional_test.go:1675: http://192.168.49.2:31813: success! body:

Hostname: hello-node-connect-67bdd5bbb4-z6lx9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31813
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.55s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (43.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dad35beb-9624-42ff-b652-73720d450452] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003658564s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-712810 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-712810 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-712810 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-712810 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0755251-757e-41f2-9d42-6c2e3d464e3f] Pending
helpers_test.go:344: "sp-pod" [c0755251-757e-41f2-9d42-6c2e3d464e3f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c0755251-757e-41f2-9d42-6c2e3d464e3f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004039833s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-712810 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-712810 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-712810 delete -f testdata/storage-provisioner/pod.yaml: (1.326363296s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-712810 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4e6b044f-cd7b-4892-bb9e-93870641b6c5] Pending
helpers_test.go:344: "sp-pod" [4e6b044f-cd7b-4892-bb9e-93870641b6c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4e6b044f-cd7b-4892-bb9e-93870641b6c5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003703633s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-712810 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.21s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.71s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh -n functional-712810 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cp functional-712810:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3748049962/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh -n functional-712810 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh -n functional-712810 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.71s)

TestFunctional/parallel/MySQL (24.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-712810 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-s67l2" [888eb2df-e7f9-410e-8349-a9679f818a9d] Pending
helpers_test.go:344: "mysql-6cdb49bbb-s67l2" [888eb2df-e7f9-410e-8349-a9679f818a9d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-s67l2" [888eb2df-e7f9-410e-8349-a9679f818a9d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.005285619s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712810 exec mysql-6cdb49bbb-s67l2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-712810 exec mysql-6cdb49bbb-s67l2 -- mysql -ppassword -e "show databases;": exit status 1 (111.637856ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 17:13:41.825798   17824 retry.go:31] will retry after 1.173415954s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712810 exec mysql-6cdb49bbb-s67l2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-712810 exec mysql-6cdb49bbb-s67l2 -- mysql -ppassword -e "show databases;": exit status 1 (105.248504ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 17:13:43.105446   17824 retry.go:31] will retry after 2.158214323s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-712810 exec mysql-6cdb49bbb-s67l2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.83s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17824/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /etc/test/nested/copy/17824/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17824.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /etc/ssl/certs/17824.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17824.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /usr/share/ca-certificates/17824.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/178242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /etc/ssl/certs/178242.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/178242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /usr/share/ca-certificates/178242.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-712810 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh "sudo systemctl is-active crio": exit status 1 (286.748062ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

TestFunctional/parallel/License (0.72s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.72s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-712810 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-712810 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-6nd4n" [a3b661e5-3717-496f-a884-51acf487e909] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-6nd4n" [a3b661e5-3717-496f-a884-51acf487e909] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004919519s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "325.755552ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.155932ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "450.650403ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.556139ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (7.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdany-port2930925207/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727457180130775329" to /tmp/TestFunctionalparallelMountCmdany-port2930925207/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727457180130775329" to /tmp/TestFunctionalparallelMountCmdany-port2930925207/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727457180130775329" to /tmp/TestFunctionalparallelMountCmdany-port2930925207/001/test-1727457180130775329
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.58858ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 17:13:00.435719   17824 retry.go:31] will retry after 565.285769ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 17:13 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 17:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 17:13 test-1727457180130775329
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh cat /mount-9p/test-1727457180130775329
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-712810 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5dd06b9c-0f57-4ed5-af52-bc6f0e4a4a19] Pending
helpers_test.go:344: "busybox-mount" [5dd06b9c-0f57-4ed5-af52-bc6f0e4a4a19] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5dd06b9c-0f57-4ed5-af52-bc6f0e4a4a19] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5dd06b9c-0f57-4ed5-af52-bc6f0e4a4a19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0043543s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-712810 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdany-port2930925207/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.76s)

TestFunctional/parallel/MountCmd/specific-port (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdspecific-port3754794642/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (236.852916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 17:13:08.129149   17824 retry.go:31] will retry after 260.504508ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdspecific-port3754794642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh "sudo umount -f /mount-9p": exit status 1 (291.54879ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-712810 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdspecific-port3754794642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service list -o json
functional_test.go:1494: Took "597.308784ms" to run "out/minikube-linux-amd64 -p functional-712810 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T" /mount1: exit status 1 (337.537833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 17:13:09.746325   17824 retry.go:31] will retry after 465.484805ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-712810 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-712810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3962513932/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30732
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30732
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712810 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-712810
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-712810
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712810 image ls --format short --alsologtostderr:
I0927 17:13:21.629030   79966 out.go:345] Setting OutFile to fd 1 ...
I0927 17:13:21.629155   79966 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:21.629166   79966 out.go:358] Setting ErrFile to fd 2...
I0927 17:13:21.629173   79966 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:21.629465   79966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
I0927 17:13:21.630309   79966 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:21.630468   79966 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:21.630874   79966 cli_runner.go:164] Run: docker container inspect functional-712810 --format={{.State.Status}}
I0927 17:13:21.651958   79966 ssh_runner.go:195] Run: systemctl --version
I0927 17:13:21.652011   79966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712810
I0927 17:13:21.672893   79966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/functional-712810/id_rsa Username:docker}
I0927 17:13:21.781052   79966 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712810 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-712810 | fc707d1d33a95 | 1.24MB |
| docker.io/library/nginx                     | latest            | 9527c0f683c3b | 188MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-712810 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-712810 | cfb4aeb2a2cc2 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712810 image ls --format table --alsologtostderr:
I0927 17:13:26.909824   80685 out.go:345] Setting OutFile to fd 1 ...
I0927 17:13:26.909939   80685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:26.909948   80685 out.go:358] Setting ErrFile to fd 2...
I0927 17:13:26.909954   80685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:26.910241   80685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
I0927 17:13:26.910825   80685 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:26.910941   80685 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:26.911476   80685 cli_runner.go:164] Run: docker container inspect functional-712810 --format={{.State.Status}}
I0927 17:13:26.930297   80685 ssh_runner.go:195] Run: systemctl --version
I0927 17:13:26.930361   80685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712810
I0927 17:13:26.949033   80685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/functional-712810/id_rsa Username:docker}
I0927 17:13:27.037318   80685 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712810 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"fc707d1d33a9550a4510aa60c67e41af7250c6d74049ab0c8d927e622e16bb4c","repoDigests":[],"repoTags":["localhost/my-image:functional-712810"],"size":"1240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"0184c1613d92931126feb4c548e5da11015
513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"cfb4aeb2a2cc2b0743e21f168025023cfbaaaa256c28e88ea921981eaa473d6a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-712810"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],
"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-712810"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"
],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712810 image ls --format json --alsologtostderr:
I0927 17:13:26.668276   80633 out.go:345] Setting OutFile to fd 1 ...
I0927 17:13:26.668597   80633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:26.668607   80633 out.go:358] Setting ErrFile to fd 2...
I0927 17:13:26.668612   80633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:26.668866   80633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
I0927 17:13:26.669711   80633 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:26.669865   80633 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:26.670390   80633 cli_runner.go:164] Run: docker container inspect functional-712810 --format={{.State.Status}}
I0927 17:13:26.691042   80633 ssh_runner.go:195] Run: systemctl --version
I0927 17:13:26.691093   80633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712810
I0927 17:13:26.711770   80633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/functional-712810/id_rsa Username:docker}
I0927 17:13:26.824419   80633 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-712810 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-712810
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: cfb4aeb2a2cc2b0743e21f168025023cfbaaaa256c28e88ea921981eaa473d6a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-712810
size: "30"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712810 image ls --format yaml --alsologtostderr:
I0927 17:13:21.892817   80015 out.go:345] Setting OutFile to fd 1 ...
I0927 17:13:21.892931   80015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:21.892942   80015 out.go:358] Setting ErrFile to fd 2...
I0927 17:13:21.892949   80015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:21.893214   80015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
I0927 17:13:21.894066   80015 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:21.894242   80015 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:21.894811   80015 cli_runner.go:164] Run: docker container inspect functional-712810 --format={{.State.Status}}
I0927 17:13:21.915057   80015 ssh_runner.go:195] Run: systemctl --version
I0927 17:13:21.915105   80015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712810
I0927 17:13:21.932164   80015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/functional-712810/id_rsa Username:docker}
I0927 17:13:22.020216   80015 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-712810 ssh pgrep buildkitd: exit status 1 (248.698499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image build -t localhost/my-image:functional-712810 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-712810 image build -t localhost/my-image:functional-712810 testdata/build --alsologtostderr: (4.089504263s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-712810 image build -t localhost/my-image:functional-712810 testdata/build --alsologtostderr:
I0927 17:13:22.345492   80232 out.go:345] Setting OutFile to fd 1 ...
I0927 17:13:22.345764   80232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:22.345774   80232 out.go:358] Setting ErrFile to fd 2...
I0927 17:13:22.345779   80232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:13:22.345982   80232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
I0927 17:13:22.346601   80232 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:22.347189   80232 config.go:182] Loaded profile config "functional-712810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 17:13:22.347637   80232 cli_runner.go:164] Run: docker container inspect functional-712810 --format={{.State.Status}}
I0927 17:13:22.365004   80232 ssh_runner.go:195] Run: systemctl --version
I0927 17:13:22.365062   80232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-712810
I0927 17:13:22.383612   80232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/functional-712810/id_rsa Username:docker}
I0927 17:13:22.468288   80232 build_images.go:161] Building image from path: /tmp/build.2309532235.tar
I0927 17:13:22.468362   80232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 17:13:22.476930   80232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2309532235.tar
I0927 17:13:22.480188   80232 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2309532235.tar: stat -c "%s %y" /var/lib/minikube/build/build.2309532235.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2309532235.tar': No such file or directory
I0927 17:13:22.480211   80232 ssh_runner.go:362] scp /tmp/build.2309532235.tar --> /var/lib/minikube/build/build.2309532235.tar (3072 bytes)
I0927 17:13:22.502253   80232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2309532235
I0927 17:13:22.510713   80232 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2309532235 -xf /var/lib/minikube/build/build.2309532235.tar
I0927 17:13:22.519825   80232 docker.go:360] Building image: /var/lib/minikube/build/build.2309532235
I0927 17:13:22.519911   80232 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-712810 /var/lib/minikube/build/build.2309532235
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.5s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:fc707d1d33a9550a4510aa60c67e41af7250c6d74049ab0c8d927e622e16bb4c done
#8 naming to localhost/my-image:functional-712810 done
#8 DONE 0.0s
I0927 17:13:26.362690   80232 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-712810 /var/lib/minikube/build/build.2309532235: (3.842750419s)
I0927 17:13:26.362793   80232 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2309532235
I0927 17:13:26.373786   80232 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2309532235.tar
I0927 17:13:26.384013   80232 build_images.go:217] Built localhost/my-image:functional-712810 from /tmp/build.2309532235.tar
I0927 17:13:26.384049   80232 build_images.go:133] succeeded building to: functional-712810
I0927 17:13:26.384056   80232 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.56s)

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.896664001s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-712810
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image load --daemon kicbase/echo-server:functional-712810 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image load --daemon kicbase/echo-server:functional-712810 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/09/27 17:13:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-712810
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image load --daemon kicbase/echo-server:functional-712810 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 77564: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-712810 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f824a6dd-1b1f-4b24-9c29-55eb8f948969] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f824a6dd-1b1f-4b24-9c29-55eb8f948969] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004469822s
I0927 17:13:29.452076   17824 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image save kicbase/echo-server:functional-712810 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image rm kicbase/echo-server:functional-712810 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-712810
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 image save --daemon kicbase/echo-server:functional-712810 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-712810
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

TestFunctional/parallel/DockerEnv/bash (0.95s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-712810 docker-env) && out/minikube-linux-amd64 status -p functional-712810"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-712810 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-712810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-712810 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.122.52 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-712810 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-712810
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-712810
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-712810
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (96.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-028020 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 17:14:53.637086   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.643496   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.654972   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.676521   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.717954   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.799417   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:53.961035   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:54.282663   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:54.924724   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:56.206542   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:14:58.768016   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:15:03.889641   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:15:14.131422   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-028020 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m36.299005244s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.96s)

TestMultiControlPlane/serial/DeployApp (44.11s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-028020 -- rollout status deployment/busybox: (4.638640787s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:30.195939   17824 retry.go:31] will retry after 809.531514ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:31.138624   17824 retry.go:31] will retry after 850.045474ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:32.107128   17824 retry.go:31] will retry after 2.75011026s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0927 17:15:34.613462   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:34.969053   17824 retry.go:31] will retry after 4.400219022s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:39.479389   17824 retry.go:31] will retry after 6.281168458s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:45.870859   17824 retry.go:31] will retry after 5.142924072s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0927 17:15:51.130455   17824 retry.go:31] will retry after 16.537410874s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-4d4m9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-7qddp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-g6mb2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-4d4m9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-7qddp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-g6mb2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-4d4m9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-7qddp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-g6mb2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (44.11s)
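The retry loop above (ha_test.go:140/149) polls the pods' jsonpath output until one IP per busybox replica appears. As a minimal sketch, the check can be replayed against the two-IP output captured in the log (the real test runs kubectl against the live cluster; the sample string and expected count below are taken from the retry messages):

```shell
# Replays the Pod-IP count check against canned jsonpath output.
output="10.244.0.4 10.244.1.2"   # stdout captured in the retries above
expected=3                        # one IP per busybox replica
got=$(echo "$output" | wc -w)
if [ "$got" -ne "$expected" ]; then
  echo "expected $expected Pod IPs but got $got (may be temporary)"
fi
```

Once the third replica is scheduled, the word count reaches 3 and the retry loop exits, which is why the test still passes after roughly 44 s of retries.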

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-4d4m9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-4d4m9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-7qddp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-7qddp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-g6mb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-028020 -- exec busybox-7dff88458-g6mb2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)
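The pipeline at ha_test.go:207 extracts the host gateway IP from busybox's nslookup output by taking space-delimited field 3 of line 5. A sketch against canned busybox-style nslookup output (the exact layout here is an assumption; in the test the command runs inside each pod):

```shell
# Parse the host IP the same way the test does: line 5, field 3.
# The nslookup text below is canned sample output, not captured from this run.
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'
echo "$out" | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.49.1
```

The extracted address (192.168.49.1 here, matching the ping target in the log) is then handed to `ping -c 1` at ha_test.go:218.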

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (20.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-028020 -v=7 --alsologtostderr
E0927 17:16:15.576019   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-028020 -v=7 --alsologtostderr: (19.425275038s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.22s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-028020 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp testdata/cp-test.txt ha-028020:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile553290115/001/cp-test_ha-028020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020:/home/docker/cp-test.txt ha-028020-m02:/home/docker/cp-test_ha-028020_ha-028020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test_ha-028020_ha-028020-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020:/home/docker/cp-test.txt ha-028020-m03:/home/docker/cp-test_ha-028020_ha-028020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test_ha-028020_ha-028020-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020:/home/docker/cp-test.txt ha-028020-m04:/home/docker/cp-test_ha-028020_ha-028020-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test_ha-028020_ha-028020-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp testdata/cp-test.txt ha-028020-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile553290115/001/cp-test_ha-028020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m02:/home/docker/cp-test.txt ha-028020:/home/docker/cp-test_ha-028020-m02_ha-028020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test_ha-028020-m02_ha-028020.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m02:/home/docker/cp-test.txt ha-028020-m03:/home/docker/cp-test_ha-028020-m02_ha-028020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test_ha-028020-m02_ha-028020-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m02:/home/docker/cp-test.txt ha-028020-m04:/home/docker/cp-test_ha-028020-m02_ha-028020-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test_ha-028020-m02_ha-028020-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp testdata/cp-test.txt ha-028020-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile553290115/001/cp-test_ha-028020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m03:/home/docker/cp-test.txt ha-028020:/home/docker/cp-test_ha-028020-m03_ha-028020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test_ha-028020-m03_ha-028020.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m03:/home/docker/cp-test.txt ha-028020-m02:/home/docker/cp-test_ha-028020-m03_ha-028020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test_ha-028020-m03_ha-028020-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m03:/home/docker/cp-test.txt ha-028020-m04:/home/docker/cp-test_ha-028020-m03_ha-028020-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test_ha-028020-m03_ha-028020-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp testdata/cp-test.txt ha-028020-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile553290115/001/cp-test_ha-028020-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m04:/home/docker/cp-test.txt ha-028020:/home/docker/cp-test_ha-028020-m04_ha-028020.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020 "sudo cat /home/docker/cp-test_ha-028020-m04_ha-028020.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m04:/home/docker/cp-test.txt ha-028020-m02:/home/docker/cp-test_ha-028020-m04_ha-028020-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m02 "sudo cat /home/docker/cp-test_ha-028020-m04_ha-028020-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 cp ha-028020-m04:/home/docker/cp-test.txt ha-028020-m03:/home/docker/cp-test_ha-028020-m04_ha-028020-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 ssh -n ha-028020-m03 "sudo cat /home/docker/cp-test_ha-028020-m04_ha-028020-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.12s)
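Every `cp` in the run above is verified by an ssh `sudo cat` of the destination file. The pattern reduces to copy-then-compare; a local stand-in (plain cp and diff in place of `minikube cp` / `minikube ssh`, with placeholder file content, since testdata/cp-test.txt's real content is not shown in the log):

```shell
# Copy-then-verify pattern from the CopyFile test, using local files only.
src=$(mktemp) && dst=$(mktemp)
echo "placeholder cp-test content" > "$src"
cp "$src" "$dst"                   # stands in for: minikube -p ha-028020 cp ...
diff -u "$src" "$dst" && echo "contents match"   # stands in for: ssh -n ... "sudo cat ..."
```

The test repeats this for every source/destination node pair (primary, m02, m03, m04), which is why the log shows a cp/ssh pair per combination.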

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-028020 node stop m02 -v=7 --alsologtostderr: (10.731940874s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr: exit status 7 (630.669475ms)

                                                
                                                
-- stdout --
	ha-028020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-028020-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028020-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-028020-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 17:16:57.427898  109085 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:16:57.428047  109085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:16:57.428057  109085 out.go:358] Setting ErrFile to fd 2...
	I0927 17:16:57.428063  109085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:16:57.428273  109085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:16:57.428478  109085 out.go:352] Setting JSON to false
	I0927 17:16:57.428503  109085 mustload.go:65] Loading cluster: ha-028020
	I0927 17:16:57.428599  109085 notify.go:220] Checking for updates...
	I0927 17:16:57.429051  109085 config.go:182] Loaded profile config "ha-028020": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:16:57.429079  109085 status.go:174] checking status of ha-028020 ...
	I0927 17:16:57.429614  109085 cli_runner.go:164] Run: docker container inspect ha-028020 --format={{.State.Status}}
	I0927 17:16:57.449401  109085 status.go:364] ha-028020 host status = "Running" (err=<nil>)
	I0927 17:16:57.449437  109085 host.go:66] Checking if "ha-028020" exists ...
	I0927 17:16:57.449741  109085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028020
	I0927 17:16:57.469235  109085 host.go:66] Checking if "ha-028020" exists ...
	I0927 17:16:57.469591  109085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:16:57.469653  109085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028020
	I0927 17:16:57.488973  109085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/ha-028020/id_rsa Username:docker}
	I0927 17:16:57.577039  109085 ssh_runner.go:195] Run: systemctl --version
	I0927 17:16:57.581037  109085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:16:57.591937  109085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:16:57.643317  109085 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-27 17:16:57.633877292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 17:16:57.643965  109085 kubeconfig.go:125] found "ha-028020" server: "https://192.168.49.254:8443"
	I0927 17:16:57.643996  109085 api_server.go:166] Checking apiserver status ...
	I0927 17:16:57.644040  109085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:16:57.655623  109085 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2322/cgroup
	I0927 17:16:57.664482  109085 api_server.go:182] apiserver freezer: "12:freezer:/docker/46b0b57715e6ae4ab626e756cebd79c5fada919b4a18ff138e0b58f6c4703e00/kubepods/burstable/podb0ba61d1d6b751a636213cebb7b173db/a4df84bb551f25903c5d6ccb518ecaffbe2d619d79a81cafc3935d7973e66eae"
	I0927 17:16:57.664548  109085 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/46b0b57715e6ae4ab626e756cebd79c5fada919b4a18ff138e0b58f6c4703e00/kubepods/burstable/podb0ba61d1d6b751a636213cebb7b173db/a4df84bb551f25903c5d6ccb518ecaffbe2d619d79a81cafc3935d7973e66eae/freezer.state
	I0927 17:16:57.672890  109085 api_server.go:204] freezer state: "THAWED"
	I0927 17:16:57.672954  109085 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 17:16:57.677100  109085 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 17:16:57.677125  109085 status.go:456] ha-028020 apiserver status = Running (err=<nil>)
	I0927 17:16:57.677137  109085 status.go:176] ha-028020 status: &{Name:ha-028020 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:16:57.677156  109085 status.go:174] checking status of ha-028020-m02 ...
	I0927 17:16:57.677461  109085 cli_runner.go:164] Run: docker container inspect ha-028020-m02 --format={{.State.Status}}
	I0927 17:16:57.695636  109085 status.go:364] ha-028020-m02 host status = "Stopped" (err=<nil>)
	I0927 17:16:57.695657  109085 status.go:377] host is not running, skipping remaining checks
	I0927 17:16:57.695664  109085 status.go:176] ha-028020-m02 status: &{Name:ha-028020-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:16:57.695682  109085 status.go:174] checking status of ha-028020-m03 ...
	I0927 17:16:57.695980  109085 cli_runner.go:164] Run: docker container inspect ha-028020-m03 --format={{.State.Status}}
	I0927 17:16:57.712555  109085 status.go:364] ha-028020-m03 host status = "Running" (err=<nil>)
	I0927 17:16:57.712585  109085 host.go:66] Checking if "ha-028020-m03" exists ...
	I0927 17:16:57.712846  109085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028020-m03
	I0927 17:16:57.730531  109085 host.go:66] Checking if "ha-028020-m03" exists ...
	I0927 17:16:57.730789  109085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:16:57.730825  109085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028020-m03
	I0927 17:16:57.747988  109085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/ha-028020-m03/id_rsa Username:docker}
	I0927 17:16:57.829077  109085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:16:57.839294  109085 kubeconfig.go:125] found "ha-028020" server: "https://192.168.49.254:8443"
	I0927 17:16:57.839322  109085 api_server.go:166] Checking apiserver status ...
	I0927 17:16:57.839351  109085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:16:57.849330  109085 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2255/cgroup
	I0927 17:16:57.857645  109085 api_server.go:182] apiserver freezer: "12:freezer:/docker/c2fbf6b8f191580ff0fc6f5cac8463f1770b3d6ec3adcff83275cb482ee6ec2c/kubepods/burstable/pod5a3fe44145ee0a1398b20fd55421e15e/6e379db298a23ff7567f65e8ad9dd12d3b47690b2761eba4275bfbf63cfde451"
	I0927 17:16:57.857709  109085 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c2fbf6b8f191580ff0fc6f5cac8463f1770b3d6ec3adcff83275cb482ee6ec2c/kubepods/burstable/pod5a3fe44145ee0a1398b20fd55421e15e/6e379db298a23ff7567f65e8ad9dd12d3b47690b2761eba4275bfbf63cfde451/freezer.state
	I0927 17:16:57.865543  109085 api_server.go:204] freezer state: "THAWED"
	I0927 17:16:57.865573  109085 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 17:16:57.869161  109085 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 17:16:57.869186  109085 status.go:456] ha-028020-m03 apiserver status = Running (err=<nil>)
	I0927 17:16:57.869195  109085 status.go:176] ha-028020-m03 status: &{Name:ha-028020-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:16:57.869213  109085 status.go:174] checking status of ha-028020-m04 ...
	I0927 17:16:57.869503  109085 cli_runner.go:164] Run: docker container inspect ha-028020-m04 --format={{.State.Status}}
	I0927 17:16:57.886786  109085 status.go:364] ha-028020-m04 host status = "Running" (err=<nil>)
	I0927 17:16:57.886813  109085 host.go:66] Checking if "ha-028020-m04" exists ...
	I0927 17:16:57.887067  109085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028020-m04
	I0927 17:16:57.903931  109085 host.go:66] Checking if "ha-028020-m04" exists ...
	I0927 17:16:57.904178  109085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:16:57.904213  109085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028020-m04
	I0927 17:16:57.920790  109085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/ha-028020-m04/id_rsa Username:docker}
	I0927 17:16:58.004593  109085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:16:58.015445  109085 status.go:176] ha-028020-m04 status: &{Name:ha-028020-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
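The stderr above shows how `minikube status` decides an apiserver is running: it resolves the kube-apiserver PID's freezer cgroup from /proc/&lt;pid&gt;/cgroup, reads freezer.state (expecting "THAWED"), and only then probes /healthz. The cgroup-line parsing step in isolation, against a canned line with a shortened, hypothetical container path (the real paths are the long hashes in the log):

```shell
# Extract the freezer cgroup path from a /proc/<pid>/cgroup entry, as the
# status check does. The line below is a canned sample, not from this run.
line="12:freezer:/docker/abc123/kubepods/burstable/pod-example/ctr-example"
path=$(echo "$line" | cut -d: -f3)
echo "/sys/fs/cgroup/freezer${path}/freezer.state"   # file expected to read THAWED
```

A stopped node never reaches this step: its container inspect returns "Stopped" first, which is why m02's report above skips straight to "host is not running, skipping remaining checks".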

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-028020 node start m02 -v=7 --alsologtostderr: (36.828159759s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.76s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (243.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-028020 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-028020 -v=7 --alsologtostderr
E0927 17:17:37.497505   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.548090   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.554529   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.565979   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.587401   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.628846   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.710297   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:58.871847   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:59.193651   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:17:59.835706   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:18:01.118003   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:18:03.680333   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:18:08.801953   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-028020 -v=7 --alsologtostderr: (33.75546321s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-028020 --wait=true -v=7 --alsologtostderr
E0927 17:18:19.043259   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:18:39.525383   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:19:20.486969   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:19:53.636262   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:20:21.340707   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:20:42.408780   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-028020 --wait=true -v=7 --alsologtostderr: (3m29.313671116s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-028020
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (243.16s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-028020 node delete m03 -v=7 --alsologtostderr: (8.599100893s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.33s)
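The `kubectl get nodes -o go-template` call above (ha_test.go:519) prints one `True`/`False` per node by filtering `.status.conditions` for the `Ready` type. As a rough offline sketch of that filter, using a hand-written one-node JSON status rather than a live cluster, the same check can be approximated with standard shell tools:

```shell
# Stand-in for `kubectl get nodes -o json`: a single node whose Ready condition is True.
# (Hypothetical sample data, not captured from the test cluster.)
nodes='{"items":[{"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}}]}'

# Approximate the go-template filter: print the status of every "Ready" condition.
ready=$(printf '%s\n' "$nodes" \
  | grep -o '{"type":"Ready","status":"[A-Za-z]*"}' \
  | sed 's/.*"status":"\([A-Za-z]*\)".*/\1/')
echo "$ready"
```

Against a real cluster the go-template form in the log is the reliable way to do this; the grep/sed version only holds for compact, unescaped JSON like the sample.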

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-028020 stop -v=7 --alsologtostderr: (32.258241818s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr: exit status 7 (97.843173ms)

                                                
                                                
-- stdout --
	ha-028020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028020-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028020-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 17:22:22.642605  140595 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:22:22.642717  140595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:22:22.642727  140595 out.go:358] Setting ErrFile to fd 2...
	I0927 17:22:22.642734  140595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:22:22.642921  140595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:22:22.643082  140595 out.go:352] Setting JSON to false
	I0927 17:22:22.643107  140595 mustload.go:65] Loading cluster: ha-028020
	I0927 17:22:22.643223  140595 notify.go:220] Checking for updates...
	I0927 17:22:22.643476  140595 config.go:182] Loaded profile config "ha-028020": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:22:22.643490  140595 status.go:174] checking status of ha-028020 ...
	I0927 17:22:22.643886  140595 cli_runner.go:164] Run: docker container inspect ha-028020 --format={{.State.Status}}
	I0927 17:22:22.660889  140595 status.go:364] ha-028020 host status = "Stopped" (err=<nil>)
	I0927 17:22:22.660913  140595 status.go:377] host is not running, skipping remaining checks
	I0927 17:22:22.660919  140595 status.go:176] ha-028020 status: &{Name:ha-028020 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:22:22.660947  140595 status.go:174] checking status of ha-028020-m02 ...
	I0927 17:22:22.661195  140595 cli_runner.go:164] Run: docker container inspect ha-028020-m02 --format={{.State.Status}}
	I0927 17:22:22.681257  140595 status.go:364] ha-028020-m02 host status = "Stopped" (err=<nil>)
	I0927 17:22:22.681282  140595 status.go:377] host is not running, skipping remaining checks
	I0927 17:22:22.681290  140595 status.go:176] ha-028020-m02 status: &{Name:ha-028020-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:22:22.681308  140595 status.go:174] checking status of ha-028020-m04 ...
	I0927 17:22:22.681533  140595 cli_runner.go:164] Run: docker container inspect ha-028020-m04 --format={{.State.Status}}
	I0927 17:22:22.698794  140595 status.go:364] ha-028020-m04 host status = "Stopped" (err=<nil>)
	I0927 17:22:22.698814  140595 status.go:377] host is not running, skipping remaining checks
	I0927 17:22:22.698820  140595 status.go:176] ha-028020-m04 status: &{Name:ha-028020-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.14s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-028020 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 17:22:58.547557   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:23:26.250881   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-028020 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.401832008s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.14s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (36.23s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-028020 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-028020 --control-plane -v=7 --alsologtostderr: (35.442705029s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-028020 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.23s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                    
TestImageBuild/serial/Setup (23.67s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-040297 --driver=docker  --container-runtime=docker
E0927 17:24:53.637127   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-040297 --driver=docker  --container-runtime=docker: (23.674867586s)
--- PASS: TestImageBuild/serial/Setup (23.67s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.62s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-040297
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-040297: (2.617167013s)
--- PASS: TestImageBuild/serial/NormalBuild (2.62s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.95s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-040297
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-040297
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-040297
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

                                                
                                    
TestJSONOutput/start/Command (65.26s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-549246 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-549246 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m5.262326298s)
--- PASS: TestJSONOutput/start/Command (65.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-549246 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-549246 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-549246 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-549246 --output=json --user=testUser: (5.724340186s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-494539 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-494539 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.664639ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8d1ce5a8-f76d-42ed-bf8d-7b9642b1b436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-494539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c02fb40-0790-4322-a788-ac175aad23ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"821a712f-a4c4-4212-a90a-107c1baa8dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"746a74d5-47e4-41d8-9ded-1e56d57ef9f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig"}}
	{"specversion":"1.0","id":"53fac4fa-3a27-4a42-b0d1-a8e5c3c504d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube"}}
	{"specversion":"1.0","id":"3178a5eb-4fcf-4ff6-b528-d84b33d42fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4821c725-468e-4678-b641-486a71f791c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3ba7ff6-2297-446b-93e9-17a54329d64f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-494539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-494539
--- PASS: TestErrorJSONOutput (0.20s)
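Each line that `--output=json` emits is a self-contained CloudEvents-style JSON object, so downstream tooling can pull fields out of the stream line by line. A minimal sketch, using the error event recorded above as sample input and plain `sed` in place of a real JSON parser:

```shell
# The DRV_UNSUPPORTED_OS error event from the run above, as one JSON line
# (abbreviated to the fields of interest).
line='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver '\''fail'\'' is not supported on linux/amd64"}}'

# Extract the human-readable message field (assumes no escaped quotes inside it).
msg=$(printf '%s\n' "$line" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p')
echo "$msg"
```

For anything beyond a quick grep, a proper JSON tool such as `jq` would be the safer way to consume this stream.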

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-203858 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-203858 --network=: (24.074324097s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-203858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-203858
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-203858: (1.944069721s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.04s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.69s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-010699 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-010699 --network=bridge: (24.757830046s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-010699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-010699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-010699: (1.913282564s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.69s)

                                                
                                    
TestKicExistingNetwork (26.31s)

=== RUN   TestKicExistingNetwork
I0927 17:27:25.546990   17824 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 17:27:25.564333   17824 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 17:27:25.564410   17824 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 17:27:25.564428   17824 cli_runner.go:164] Run: docker network inspect existing-network
W0927 17:27:25.581385   17824 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 17:27:25.581411   17824 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0927 17:27:25.581428   17824 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0927 17:27:25.581541   17824 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 17:27:25.598828   17824 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-835879e89f73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e6:59:d0:15} reservation:<nil>}
I0927 17:27:25.599292   17824 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00224f280}
I0927 17:27:25.599321   17824 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 17:27:25.599367   17824 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 17:27:25.665552   17824 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-577168 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-577168 --network=existing-network: (24.331000919s)
helpers_test.go:175: Cleaning up "existing-network-577168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-577168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-577168: (1.825181727s)
I0927 17:27:51.838126   17824 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.31s)
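The free-subnet search logged above skips 192.168.49.0/24 (taken by an existing bridge) and settles on 192.168.58.0/24; later clusters in this run land on 192.168.67.0/24, suggesting the third octet advances in steps of 9. A minimal shell sketch of that selection follows; the step size, the starting octet, and the exact-match collision check are all inferred from this log, not taken from minikube's network.go:

```shell
# "taken" would normally come from `docker network inspect`; here it is a
# hard-coded stand-in. Only exact CIDR matches count as collisions in this
# sketch (real overlap checking would compare address ranges).
taken="192.168.49.0/24"

pick_free_subnet() {
  # Candidate subnets advance the third octet in steps of 9: 49, 58, 67, ...
  for x in $(seq 49 9 247); do
    candidate="192.168.$x.0/24"
    case " $taken " in
      *" $candidate "*) continue ;;   # already in use, try the next step
    esac
    echo "$candidate"
    return 0
  done
  return 1
}

pick_free_subnet   # -> 192.168.58.0/24
```

With 192.168.49.0/24 taken, the first free candidate is 192.168.58.0/24, matching the `network_create.go:124` line above.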

TestKicCustomSubnet (26.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-361089 --subnet=192.168.60.0/24
E0927 17:27:58.551240   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-361089 --subnet=192.168.60.0/24: (24.345180805s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-361089 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-361089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-361089
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-361089: (1.967010281s)
--- PASS: TestKicCustomSubnet (26.33s)

TestKicStaticIP (23.43s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-889933 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-889933 --static-ip=192.168.200.200: (21.315927384s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-889933 ip
helpers_test.go:175: Cleaning up "static-ip-889933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-889933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-889933: (1.994526901s)
--- PASS: TestKicStaticIP (23.43s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (48.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-370684 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-370684 --driver=docker  --container-runtime=docker: (21.945510829s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-380864 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-380864 --driver=docker  --container-runtime=docker: (21.389804358s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-370684
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-380864
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-380864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-380864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-380864: (1.951500443s)
helpers_test.go:175: Cleaning up "first-370684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-370684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-370684: (2.093105439s)
--- PASS: TestMinikubeProfile (48.47s)

TestMountStart/serial/StartWithMountFirst (10.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-949684 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-949684 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.337674967s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.34s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-949684 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (7.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-961397 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-961397 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.114781595s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.12s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-961397 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-949684 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-949684 --alsologtostderr -v=5: (1.44624772s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-961397 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-961397
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-961397: (1.17148316s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-961397
E0927 17:29:53.636907   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-961397: (7.758265398s)
--- PASS: TestMountStart/serial/RestartStopped (8.76s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-961397 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (76.47s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790387 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 17:31:16.702617   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790387 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.02284465s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.47s)

TestMultiNode/serial/DeployApp2Nodes (37.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-790387 -- rollout status deployment/busybox: (3.343668203s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:21.643039   17824 retry.go:31] will retry after 1.041547712s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:22.795591   17824 retry.go:31] will retry after 1.483994008s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:24.393938   17824 retry.go:31] will retry after 1.906446053s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:26.410971   17824 retry.go:31] will retry after 4.445270876s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:30.964965   17824 retry.go:31] will retry after 3.948583564s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:35.022133   17824 retry.go:31] will retry after 4.728225289s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0927 17:31:39.858501   17824 retry.go:31] will retry after 14.432733013s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-5cfnj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-8zfrg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-5cfnj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-8zfrg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-5cfnj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-8zfrg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.51s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-5cfnj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-5cfnj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-8zfrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-790387 -- exec busybox-7dff88458-8zfrg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
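The host-IP extraction above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) depends on the line layout of busybox-style nslookup output: line 5 carries the resolved address, and the third space-separated field is the IP itself. A standalone sketch with simulated resolver output; the real test runs the pipeline inside the busybox pod, and the exact output layout here is an assumption:

```shell
# Simulated busybox-style nslookup output for host.minikube.internal.
# Line 1-2: DNS server, line 3: blank, line 4: name, line 5: resolved address.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1'

# NR==5 selects the answer line; field 3 of "Address 1: <ip>" is the IP,
# which the test then pings (192.168.67.1 in the commands above).
printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3
# -> 192.168.67.1
```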

TestMultiNode/serial/AddNode (15.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-790387 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-790387 -v 3 --alsologtostderr: (14.817093341s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.40s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-790387 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (8.67s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp testdata/cp-test.txt multinode-790387:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2199368270/001/cp-test_multinode-790387.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387:/home/docker/cp-test.txt multinode-790387-m02:/home/docker/cp-test_multinode-790387_multinode-790387-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test_multinode-790387_multinode-790387-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387:/home/docker/cp-test.txt multinode-790387-m03:/home/docker/cp-test_multinode-790387_multinode-790387-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test_multinode-790387_multinode-790387-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp testdata/cp-test.txt multinode-790387-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2199368270/001/cp-test_multinode-790387-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m02:/home/docker/cp-test.txt multinode-790387:/home/docker/cp-test_multinode-790387-m02_multinode-790387.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test_multinode-790387-m02_multinode-790387.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m02:/home/docker/cp-test.txt multinode-790387-m03:/home/docker/cp-test_multinode-790387-m02_multinode-790387-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test_multinode-790387-m02_multinode-790387-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp testdata/cp-test.txt multinode-790387-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2199368270/001/cp-test_multinode-790387-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m03:/home/docker/cp-test.txt multinode-790387:/home/docker/cp-test_multinode-790387-m03_multinode-790387.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387 "sudo cat /home/docker/cp-test_multinode-790387-m03_multinode-790387.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 cp multinode-790387-m03:/home/docker/cp-test.txt multinode-790387-m02:/home/docker/cp-test_multinode-790387-m03_multinode-790387-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 ssh -n multinode-790387-m02 "sudo cat /home/docker/cp-test_multinode-790387-m03_multinode-790387-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.67s)

TestMultiNode/serial/StopNode (2.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-790387 node stop m03: (1.170639701s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790387 status: exit status 7 (436.232714ms)

-- stdout --
	multinode-790387
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-790387-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-790387-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr: exit status 7 (441.812743ms)

-- stdout --
	multinode-790387
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-790387-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-790387-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 17:32:22.686565  227879 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:32:22.686682  227879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:32:22.686691  227879 out.go:358] Setting ErrFile to fd 2...
	I0927 17:32:22.686695  227879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:32:22.686872  227879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:32:22.687032  227879 out.go:352] Setting JSON to false
	I0927 17:32:22.687054  227879 mustload.go:65] Loading cluster: multinode-790387
	I0927 17:32:22.687124  227879 notify.go:220] Checking for updates...
	I0927 17:32:22.687466  227879 config.go:182] Loaded profile config "multinode-790387": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:32:22.687484  227879 status.go:174] checking status of multinode-790387 ...
	I0927 17:32:22.687948  227879 cli_runner.go:164] Run: docker container inspect multinode-790387 --format={{.State.Status}}
	I0927 17:32:22.709316  227879 status.go:364] multinode-790387 host status = "Running" (err=<nil>)
	I0927 17:32:22.709370  227879 host.go:66] Checking if "multinode-790387" exists ...
	I0927 17:32:22.709760  227879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-790387
	I0927 17:32:22.729608  227879 host.go:66] Checking if "multinode-790387" exists ...
	I0927 17:32:22.729886  227879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:32:22.729934  227879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-790387
	I0927 17:32:22.747368  227879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/multinode-790387/id_rsa Username:docker}
	I0927 17:32:22.828941  227879 ssh_runner.go:195] Run: systemctl --version
	I0927 17:32:22.833395  227879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:32:22.843497  227879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:32:22.892165  227879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-27 17:32:22.882976015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647923200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0927 17:32:22.892785  227879 kubeconfig.go:125] found "multinode-790387" server: "https://192.168.67.2:8443"
	I0927 17:32:22.892820  227879 api_server.go:166] Checking apiserver status ...
	I0927 17:32:22.892866  227879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:32:22.903882  227879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I0927 17:32:22.912339  227879 api_server.go:182] apiserver freezer: "12:freezer:/docker/6d4f7e4695e73bf6b05416469f0aa29c600d32ca4965e6ff11a462a9928a3d9a/kubepods/burstable/podf8b56037e94a6275221a812d7ad9e4dd/1e958d4b80a0ee141cda9d3b2370f95c9631eb4f77cbd28c4f2e3bad8996c63b"
	I0927 17:32:22.912436  227879 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6d4f7e4695e73bf6b05416469f0aa29c600d32ca4965e6ff11a462a9928a3d9a/kubepods/burstable/podf8b56037e94a6275221a812d7ad9e4dd/1e958d4b80a0ee141cda9d3b2370f95c9631eb4f77cbd28c4f2e3bad8996c63b/freezer.state
	I0927 17:32:22.919994  227879 api_server.go:204] freezer state: "THAWED"
	I0927 17:32:22.920025  227879 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 17:32:22.923741  227879 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 17:32:22.923769  227879 status.go:456] multinode-790387 apiserver status = Running (err=<nil>)
	I0927 17:32:22.923800  227879 status.go:176] multinode-790387 status: &{Name:multinode-790387 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:32:22.923817  227879 status.go:174] checking status of multinode-790387-m02 ...
	I0927 17:32:22.924131  227879 cli_runner.go:164] Run: docker container inspect multinode-790387-m02 --format={{.State.Status}}
	I0927 17:32:22.942073  227879 status.go:364] multinode-790387-m02 host status = "Running" (err=<nil>)
	I0927 17:32:22.942098  227879 host.go:66] Checking if "multinode-790387-m02" exists ...
	I0927 17:32:22.942333  227879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-790387-m02
	I0927 17:32:22.959117  227879 host.go:66] Checking if "multinode-790387-m02" exists ...
	I0927 17:32:22.959372  227879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:32:22.959407  227879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-790387-m02
	I0927 17:32:22.975714  227879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19712-11000/.minikube/machines/multinode-790387-m02/id_rsa Username:docker}
	I0927 17:32:23.056559  227879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:32:23.066574  227879 status.go:176] multinode-790387-m02 status: &{Name:multinode-790387-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:32:23.066619  227879 status.go:174] checking status of multinode-790387-m03 ...
	I0927 17:32:23.066950  227879 cli_runner.go:164] Run: docker container inspect multinode-790387-m03 --format={{.State.Status}}
	I0927 17:32:23.083469  227879 status.go:364] multinode-790387-m03 host status = "Stopped" (err=<nil>)
	I0927 17:32:23.083498  227879 status.go:377] host is not running, skipping remaining checks
	I0927 17:32:23.083506  227879 status.go:176] multinode-790387-m03 status: &{Name:multinode-790387-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)

TestMultiNode/serial/StartAfterStop (9.59s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-790387 node start m03 -v=7 --alsologtostderr: (8.956388176s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.59s)

TestMultiNode/serial/RestartKeepsNodes (95.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790387
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-790387
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-790387: (22.223059255s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790387 --wait=true -v=8 --alsologtostderr
E0927 17:32:58.548047   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790387 --wait=true -v=8 --alsologtostderr: (1m13.369471298s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790387
--- PASS: TestMultiNode/serial/RestartKeepsNodes (95.69s)

TestMultiNode/serial/DeleteNode (5.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-790387 node delete m03: (4.622763566s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)

TestMultiNode/serial/StopMultiNode (21.41s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 stop
E0927 17:34:21.614789   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-790387 stop: (21.249318887s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790387 status: exit status 7 (78.199938ms)

-- stdout --
	multinode-790387
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-790387-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr: exit status 7 (77.966835ms)

-- stdout --
	multinode-790387
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-790387-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 17:34:34.891244  243112 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:34:34.891512  243112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:34:34.891523  243112 out.go:358] Setting ErrFile to fd 2...
	I0927 17:34:34.891527  243112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:34:34.891692  243112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11000/.minikube/bin
	I0927 17:34:34.891893  243112 out.go:352] Setting JSON to false
	I0927 17:34:34.891921  243112 mustload.go:65] Loading cluster: multinode-790387
	I0927 17:34:34.892027  243112 notify.go:220] Checking for updates...
	I0927 17:34:34.892302  243112 config.go:182] Loaded profile config "multinode-790387": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 17:34:34.892318  243112 status.go:174] checking status of multinode-790387 ...
	I0927 17:34:34.892708  243112 cli_runner.go:164] Run: docker container inspect multinode-790387 --format={{.State.Status}}
	I0927 17:34:34.909431  243112 status.go:364] multinode-790387 host status = "Stopped" (err=<nil>)
	I0927 17:34:34.909480  243112 status.go:377] host is not running, skipping remaining checks
	I0927 17:34:34.909487  243112 status.go:176] multinode-790387 status: &{Name:multinode-790387 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:34:34.909517  243112 status.go:174] checking status of multinode-790387-m02 ...
	I0927 17:34:34.909782  243112 cli_runner.go:164] Run: docker container inspect multinode-790387-m02 --format={{.State.Status}}
	I0927 17:34:34.927574  243112 status.go:364] multinode-790387-m02 host status = "Stopped" (err=<nil>)
	I0927 17:34:34.927605  243112 status.go:377] host is not running, skipping remaining checks
	I0927 17:34:34.927612  243112 status.go:176] multinode-790387-m02 status: &{Name:multinode-790387-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.41s)

TestMultiNode/serial/RestartMultiNode (53.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790387 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0927 17:34:53.636691   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790387 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.431245927s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-790387 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.96s)

TestMultiNode/serial/ValidateNameConflict (26.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-790387
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790387-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-790387-m02 --driver=docker  --container-runtime=docker: exit status 14 (64.095302ms)

-- stdout --
	* [multinode-790387-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-790387-m02' is duplicated with machine name 'multinode-790387-m02' in profile 'multinode-790387'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-790387-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-790387-m03 --driver=docker  --container-runtime=docker: (24.229866615s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-790387
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-790387: exit status 80 (250.652908ms)

-- stdout --
	* Adding node m03 to cluster multinode-790387 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-790387-m03 already exists in multinode-790387-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-790387-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-790387-m03: (2.007855105s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.60s)

TestPreload (155.77s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m30.716778824s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048954 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-048954 image pull gcr.io/k8s-minikube/busybox: (2.351666333s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-048954
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-048954: (10.611198805s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0927 17:37:58.551158   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.784891895s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048954 image list
helpers_test.go:175: Cleaning up "test-preload-048954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-048954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-048954: (2.110207321s)
--- PASS: TestPreload (155.77s)

TestScheduledStopUnix (97.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-780301 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-780301 --memory=2048 --driver=docker  --container-runtime=docker: (24.525350083s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-780301 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-780301 -n scheduled-stop-780301
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-780301 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 17:38:59.894286   17824 retry.go:31] will retry after 60.669µs: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.895452   17824 retry.go:31] will retry after 148.531µs: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.896589   17824 retry.go:31] will retry after 286.638µs: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.897744   17824 retry.go:31] will retry after 250.036µs: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.898871   17824 retry.go:31] will retry after 624.27µs: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.900041   17824 retry.go:31] will retry after 1.082256ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.901186   17824 retry.go:31] will retry after 1.237804ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.903401   17824 retry.go:31] will retry after 1.378576ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.905596   17824 retry.go:31] will retry after 2.128387ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.908809   17824 retry.go:31] will retry after 4.261079ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.914022   17824 retry.go:31] will retry after 5.199343ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.920279   17824 retry.go:31] will retry after 8.339447ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.929530   17824 retry.go:31] will retry after 10.622829ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.940796   17824 retry.go:31] will retry after 25.13724ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
I0927 17:38:59.967052   17824 retry.go:31] will retry after 43.468782ms: open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/scheduled-stop-780301/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-780301 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-780301 -n scheduled-stop-780301
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-780301
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-780301 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0927 17:39:53.639664   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-780301
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-780301: exit status 7 (64.227574ms)

-- stdout --
	scheduled-stop-780301
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-780301 -n scheduled-stop-780301
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-780301 -n scheduled-stop-780301: exit status 7 (60.85897ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-780301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-780301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-780301: (1.610112313s)
--- PASS: TestScheduledStopUnix (97.40s)

TestSkaffold (102.5s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1714795586 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-626241 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-626241 --memory=2600 --driver=docker  --container-runtime=docker: (20.867955297s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1714795586 run --minikube-profile skaffold-626241 --kube-context skaffold-626241 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1714795586 run --minikube-profile skaffold-626241 --kube-context skaffold-626241 --status-check=true --port-forward=false --interactive=false: (1m4.994853581s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6f664877cc-xsjfh" [11a933fa-d208-4ae9-ad11-06ca82b19fc9] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003158424s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6685c9dfdb-g5f8g" [2d174f77-af4c-4c7c-b7df-9e6e5d1694df] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003281402s
helpers_test.go:175: Cleaning up "skaffold-626241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-626241
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-626241: (2.762906049s)
--- PASS: TestSkaffold (102.50s)

                                                

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-256500 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-256500 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.49443949s)

-- stdout --
	{"specversion":"1.0","id":"51067717-2895-4f8c-9f2b-4af1ee0c91c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-256500] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1b274fc-e245-43c2-9ca2-5a97e004057e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"c62acc4d-4274-44d7-8798-d2fc583119b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d6cb05dc-e913-4fbd-aa3c-19cf88c6f9b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig"}}
	{"specversion":"1.0","id":"dea3350e-4d14-4849-bdef-4fd2267f4c03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube"}}
	{"specversion":"1.0","id":"6d134d00-1c0a-4776-a3c1-84be13b682cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4a06dca3-e1a6-4c1a-84bf-0b6d17e5fd60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ff299566-e632-42bd-b836-63f5a81ad397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a7520e57-c5ae-4ac4-a509-7a49090c5881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1d68710e-833d-4c96-a93d-8256fee16bde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"23ad0aaf-7a21-4c8c-84ef-63a5aad81d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af776d46-2b21-4956-a86e-6ae603bd9d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-256500\" primary control-plane node in \"insufficient-storage-256500\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f514044-37ec-4c3e-82e6-94ad14232f34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8491522-ba3c-4522-82c5-adb994736d2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad4c2e84-b8c8-4aa5-9e45-09346ef74de7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-256500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-256500 --output=json --layout=cluster: exit status 7 (244.565053ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 17:42:05.610256  283496 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-256500" does not appear in /home/jenkins/minikube-integration/19712-11000/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-256500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-256500 --output=json --layout=cluster: exit status 7 (242.122277ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 17:42:05.853205  283596 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-256500" does not appear in /home/jenkins/minikube-integration/19712-11000/kubeconfig
	E0927 17:42:05.862739  283596 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/insufficient-storage-256500/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-256500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-256500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-256500: (1.618668534s)
--- PASS: TestInsufficientStorage (12.60s)
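The `--layout=cluster` status payload above is plain JSON, so the InsufficientStorage condition (HTTP-style code 507) can be detected programmatically. A minimal sketch, using the fields captured in this log; the helper name `storage_exhausted` is illustrative, not part of minikube:

```python
import json

# Status payload as captured above from
# `minikube status --output=json --layout=cluster` (trimmed to the fields used here).
status_json = '''
{"Name":"insufficient-storage-256500","StatusCode":507,
 "StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space",
 "Nodes":[{"Name":"insufficient-storage-256500","StatusCode":507,
   "StatusName":"InsufficientStorage",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
'''

def storage_exhausted(payload: str) -> bool:
    """Return True when the cluster reports InsufficientStorage (code 507)."""
    status = json.loads(payload)
    return status["StatusCode"] == 507

print(storage_exhausted(status_json))  # True
```

Note the nested Components also use HTTP-style codes (405 Stopped), consistent with the paused-cluster output later in this report.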

                                                
                                    
TestRunningBinaryUpgrade (79.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1563561648 start -p running-upgrade-853514 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1563561648 start -p running-upgrade-853514 --memory=2200 --vm-driver=docker  --container-runtime=docker: (34.698601294s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-853514 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-853514 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.542903129s)
helpers_test.go:175: Cleaning up "running-upgrade-853514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-853514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-853514: (2.146204464s)
--- PASS: TestRunningBinaryUpgrade (79.56s)

                                                
                                    
TestKubernetesUpgrade (343.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.197291673s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-857251
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-857251: (11.923162983s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-857251 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-857251 status --format={{.Host}}: exit status 7 (65.530476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.632510525s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-857251 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (67.349218ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-857251] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-857251
	    minikube start -p kubernetes-upgrade-857251 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8572512 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-857251 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857251 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.311817615s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-857251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-857251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-857251: (2.297153094s)
--- PASS: TestKubernetesUpgrade (343.55s)
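The downgrade attempt above exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) because the requested v1.20.0 is older than the cluster's existing v1.31.1. minikube's actual guard is implemented in Go; the following is only a hypothetical sketch of the version comparison such a check implies:

```python
def parse_version(v: str) -> tuple:
    """Turn 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(current: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster's."""
    return parse_version(requested) < parse_version(current)

# v1.31.1 -> v1.20.0 is a downgrade, so start is refused and the user is
# told to delete/recreate the profile instead (as in the stderr above).
print(downgrade_requested("v1.31.1", "v1.20.0"))  # True
```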

                                                
                                    
TestMissingContainerUpgrade (162.16s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2391927026 start -p missing-upgrade-687634 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2391927026 start -p missing-upgrade-687634 --memory=2200 --driver=docker  --container-runtime=docker: (1m33.091002775s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-687634
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-687634: (10.43215047s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-687634
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-687634 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-687634 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.055081528s)
helpers_test.go:175: Cleaning up "missing-upgrade-687634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-687634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-687634: (2.124148212s)
--- PASS: TestMissingContainerUpgrade (162.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (88.631099ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-278754] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278754 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278754 --driver=docker  --container-runtime=docker: (34.000591958s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-278754 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --driver=docker  --container-runtime=docker: (14.870808581s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-278754 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-278754 status -o json: exit status 2 (263.125262ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-278754","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-278754
E0927 17:42:58.547163   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-278754: (1.694832757s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.83s)
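The `status -o json` output above is why the command exits 2 here: the container host is Running while the Kubernetes components are Stopped, which is the expected shape for a `--no-kubernetes` profile. A minimal sketch parsing the exact payload from this log:

```python
import json

# Output captured above from `minikube status -o json` on a --no-kubernetes profile.
status = json.loads(
    '{"Name":"NoKubernetes-278754","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

# Host up, kube components down: the profile is healthy but Kubernetes-free.
print(status["Host"], status["Kubelet"], status["APIServer"])  # Running Stopped Stopped
```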

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (147.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3983866782 start -p stopped-upgrade-916644 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3983866782 start -p stopped-upgrade-916644 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m51.969688876s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3983866782 -p stopped-upgrade-916644 stop
E0927 17:44:53.637548   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3983866782 -p stopped-upgrade-916644 stop: (10.647573559s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-916644 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-916644 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.371625051s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (147.99s)

                                                
                                    
TestNoKubernetes/serial/Start (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278754 --no-kubernetes --driver=docker  --container-runtime=docker: (7.5380273s)
--- PASS: TestNoKubernetes/serial/Start (7.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-278754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-278754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.53036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-278754
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-278754: (1.190744114s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-278754 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-278754 --driver=docker  --container-runtime=docker: (7.824284021s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-278754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-278754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.240846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-916644
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-916644: (1.698711212s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.70s)

                                                
                                    
TestPause/serial/Start (72.3s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-817726 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-817726 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m12.299550671s)
--- PASS: TestPause/serial/Start (72.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (150.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-994231 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0927 17:46:46.234451   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:51.356375   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-994231 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m30.414442475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (44.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-015062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 17:47:01.598599   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-015062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (44.665012886s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (44.67s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (33.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-817726 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0927 17:47:22.080384   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-817726 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.827147241s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-015062 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [57cc17c3-6b42-4ac1-b0d9-ecbf96c62462] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [57cc17c3-6b42-4ac1-b0d9-ecbf96c62462] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003439238s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-015062 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-817726 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-817726 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-817726 --output=json --layout=cluster: exit status 2 (280.869798ms)

                                                
                                                
-- stdout --
	{"Name":"pause-817726","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-817726","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
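The paused-cluster status above reuses HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK, matching the 507 InsufficientStorage case earlier in this report. A small sketch reading the fields from the payload captured above (trimmed):

```python
import json

# Trimmed from the `minikube status --output=json --layout=cluster` output above.
paused = json.loads(
    '{"Name":"pause-817726","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-817726","StatusCode":200,"StatusName":"OK"}]}'
)

# The cluster as a whole reports Paused (418) even though the node itself is OK (200),
# since pausing freezes containers without stopping the node.
print(paused["StatusName"], paused["Nodes"][0]["StatusName"])  # Paused OK
```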

                                                
                                    
TestPause/serial/Unpause (0.45s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-817726 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.45s)

                                                
                                    
TestPause/serial/PauseAgain (0.67s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-817726 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

                                                
                                    
TestPause/serial/DeletePaused (2.07s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-817726 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-817726 --alsologtostderr -v=5: (2.071843562s)
--- PASS: TestPause/serial/DeletePaused (2.07s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.75s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-817726
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-817726: exit status 1 (16.914031ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-817726: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-015062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-015062 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.327484885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.33s)

TestStartStop/group/no-preload/serial/Stop (10.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-015062 --alsologtostderr -v=3
E0927 17:47:56.704046   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:47:58.548015   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:48:03.042530   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-015062 --alsologtostderr -v=3: (10.85323791s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-015062 -n no-preload-015062
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-015062 -n no-preload-015062: exit status 7 (114.171483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-015062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (263.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-015062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-015062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.750268037s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-015062 -n no-preload-015062
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527051 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [25398728-21ad-4aa7-8c47-3dee8a7f263b] Pending
helpers_test.go:344: "busybox" [25398728-21ad-4aa7-8c47-3dee8a7f263b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [25398728-21ad-4aa7-8c47-3dee8a7f263b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003792429s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527051 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

TestStartStop/group/newest-cni/serial/FirstStart (29.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-322998 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-322998 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (29.876281918s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-527051 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-527051 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-527051 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-527051 --alsologtostderr -v=3: (10.673647376s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-994231 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a4c95e5-b1b2-4fca-88a1-694cc59a4610] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9a4c95e5-b1b2-4fca-88a1-694cc59a4610] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003977797s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-994231 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051: exit status 7 (99.85144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527051 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 17:49:24.964444   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.524964242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-994231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-994231 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (11.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-994231 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-994231 --alsologtostderr -v=3: (11.117921958s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-322998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-322998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101538198s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/newest-cni/serial/Stop (5.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-322998 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-322998 --alsologtostderr -v=3: (5.767724396s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.77s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994231 -n old-k8s-version-994231
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994231 -n old-k8s-version-994231: exit status 7 (99.404355ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-994231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (137.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-994231 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-994231 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m17.243950797s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994231 -n old-k8s-version-994231
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (137.53s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-322998 -n newest-cni-322998
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-322998 -n newest-cni-322998: exit status 7 (120.725797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-322998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (17.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-322998 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 17:49:53.636515   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-322998 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (16.776560402s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-322998 -n newest-cni-322998
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-322998 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-322998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-322998 -n newest-cni-322998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-322998 -n newest-cni-322998: exit status 2 (304.906188ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-322998 -n newest-cni-322998
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-322998 -n newest-cni-322998: exit status 2 (300.114858ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-322998 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-322998 -n newest-cni-322998
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-322998 -n newest-cni-322998
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

TestStartStop/group/embed-certs/serial/FirstStart (69.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-031274 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 17:51:01.616189   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-031274 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m9.79569055s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.80s)

TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-031274 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [451f8fb0-7077-49a3-aa50-7985c4eb88b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [451f8fb0-7077-49a3-aa50-7985c4eb88b0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004022219s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-031274 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-031274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-031274 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/embed-certs/serial/Stop (10.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-031274 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-031274 --alsologtostderr -v=3: (10.741845586s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.74s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031274 -n embed-certs-031274
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031274 -n embed-certs-031274: exit status 7 (97.749324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-031274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (263.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-031274 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0927 17:51:41.102910   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-031274 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.508139856s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031274 -n embed-certs-031274
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.89s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bcx2b" [56fa0d22-3c37-4149-9c4d-18ec9b6ba3e3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00451s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bcx2b" [56fa0d22-3c37-4149-9c4d-18ec9b6ba3e3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004804037s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-994231 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-994231 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-994231 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994231 -n old-k8s-version-994231
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994231 -n old-k8s-version-994231: exit status 2 (268.997579ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994231 -n old-k8s-version-994231
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994231 -n old-k8s-version-994231: exit status 2 (286.618673ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-994231 --alsologtostderr -v=1
E0927 17:52:08.806226   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994231 -n old-k8s-version-994231
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994231 -n old-k8s-version-994231
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m11.160011594s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jg4xh" [33504283-f6a8-4b5b-957f-1b9c7c7eb47a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003776203s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jg4xh" [33504283-f6a8-4b5b-957f-1b9c7c7eb47a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004341026s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-015062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-015062 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-015062 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-015062 -n no-preload-015062
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-015062 -n no-preload-015062: exit status 2 (293.086823ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-015062 -n no-preload-015062
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-015062 -n no-preload-015062: exit status 2 (283.39528ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-015062 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-015062 -n no-preload-015062
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-015062 -n no-preload-015062
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (57.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0927 17:52:44.462513   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.468942   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.480325   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.501754   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.543190   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.624700   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:44.786524   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:45.108231   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:45.749738   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:47.031008   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:49.592789   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:54.714662   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:52:58.547692   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/functional-712810/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:04.956974   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.166917932s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.17s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-743431 "pgrep -a kubelet"
I0927 17:53:23.853499   17824 config.go:182] Loaded profile config "auto-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rhhlf" [242c61f3-eaa2-4645-8765-3e0706191006] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 17:53:25.438242   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rhhlf" [242c61f3-eaa2-4645-8765-3e0706191006] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004192789s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7ghn5" [7f3d3616-fce4-4419-ba6e-d84b5de640e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018261848s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-743431 "pgrep -a kubelet"
I0927 17:53:46.222723   17824 config.go:182] Loaded profile config "kindnet-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vpcj7" [f6ca1f3d-7c2e-4eac-a2c5-063151e76cbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vpcj7" [f6ca1f3d-7c2e-4eac-a2c5-063151e76cbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003490834s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x86q2" [80633cd2-c6fa-4b08-82e4-588d882dfab8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003710941s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (65.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m5.48590655s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x86q2" [80633cd2-c6fa-4b08-82e4-588d882dfab8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003925982s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-527051 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-527051 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-527051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051: exit status 2 (298.404068ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051: exit status 2 (286.356552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-527051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527051 -n default-k8s-diff-port-527051
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (48.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0927 17:54:06.399564   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.123864845s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.12s)

                                                
                                    
TestNetworkPlugins/group/false/Start (70.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0927 17:54:16.580141   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.586532   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.597849   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.620464   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.663980   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.746205   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:16.907960   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:17.229719   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:17.871432   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:19.152828   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:21.714922   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:26.840010   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:37.081731   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m10.425963382s)
--- PASS: TestNetworkPlugins/group/false/Start (70.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-743431 "pgrep -a kubelet"
I0927 17:54:53.001174   17824 config.go:182] Loaded profile config "custom-flannel-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cprv4" [cda8c097-bd17-4bcf-853c-5b932f107232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 17:54:53.637132   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/addons-393052/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cprv4" [cda8c097-bd17-4bcf-853c-5b932f107232] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004126003s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bbngv" [5bb4aa51-6b20-4a3a-af2e-2301ba1f0142] Running
E0927 17:54:57.563670   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004226834s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-743431 "pgrep -a kubelet"
I0927 17:55:03.620816   17824 config.go:182] Loaded profile config "calico-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f2qc6" [494b6876-6b4a-4d0d-ac50-41c7fb5c0e38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f2qc6" [494b6876-6b4a-4d0d-ac50-41c7fb5c0e38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003852025s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m11.712005889s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.71s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-743431 "pgrep -a kubelet"
I0927 17:55:27.288907   17824 config.go:182] Loaded profile config "false-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jc85q" [cc58734e-e561-4ac8-9775-24e3ba551534] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 17:55:28.321369   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/no-preload-015062/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jc85q" [cc58734e-e561-4ac8-9775-24e3ba551534] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004366458s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (47.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.864966139s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.87s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (43.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (43.312470839s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c9z7c" [eae79a7e-5bf6-4295-9cd8-42819be7d57f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.053475735s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c9z7c" [eae79a7e-5bf6-4295-9cd8-42819be7d57f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005024399s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-031274 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-031274 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-031274 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031274 -n embed-certs-031274
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031274 -n embed-certs-031274: exit status 2 (289.300319ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031274 -n embed-certs-031274
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031274 -n embed-certs-031274: exit status 2 (295.219284ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-031274 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031274 -n embed-certs-031274
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031274 -n embed-certs-031274
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)
E0927 17:57:00.447165   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/old-k8s-version-994231/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (67.73s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-743431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m7.732955599s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.73s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jk7gt" [236788a5-d3ff-4beb-be3b-9bb65e5727e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004956073s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-743431 "pgrep -a kubelet"
I0927 17:56:28.305414   17824 config.go:182] Loaded profile config "flannel-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fq7zf" [6426f625-df53-4b5e-833c-c50371c05eb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fq7zf" [6426f625-df53-4b5e-833c-c50371c05eb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003620817s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-743431 "pgrep -a kubelet"
I0927 17:56:32.121559   17824 config.go:182] Loaded profile config "enable-default-cni-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fp7m2" [d2230b44-49fb-48e9-963c-15062242bef3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fp7m2" [d2230b44-49fb-48e9-963c-15062242bef3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00379885s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-743431 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0927 17:56:39.848439   17824 config.go:182] Loaded profile config "bridge-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hl5fq" [7ec733b7-0b61-4e57-bf19-bd54a999b1f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 17:56:41.102100   17824 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/skaffold-626241/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hl5fq" [7ec733b7-0b61-4e57-bf19-bd54a999b1f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004956339s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-743431 "pgrep -a kubelet"
I0927 17:57:20.739331   17824 config.go:182] Loaded profile config "kubenet-743431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-743431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b97js" [230484f4-0e18-44c8-ba92-73e207f9d390] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b97js" [230484f4-0e18-44c8-ba92-73e207f9d390] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004549273s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-743431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-743431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-222605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-222605
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-743431 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-743431

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-743431

>>> host: /etc/nsswitch.conf:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/hosts:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/resolv.conf:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-743431

>>> host: crictl pods:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: crictl containers:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> k8s: describe netcat deployment:
error: context "cilium-743431" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-743431" does not exist

>>> k8s: netcat logs:
error: context "cilium-743431" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-743431" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-743431" does not exist

>>> k8s: coredns logs:
error: context "cilium-743431" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-743431" does not exist

>>> k8s: api server logs:
error: context "cilium-743431" does not exist

>>> host: /etc/cni:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: ip a s:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: ip r s:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: iptables-save:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: iptables table nat:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-743431

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-743431

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-743431" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-743431" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-743431

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-743431

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-743431" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-743431" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-743431" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-743431" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-743431" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: kubelet daemon config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> k8s: kubelet logs:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 17:45:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-381364
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19712-11000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 17:44:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-857251
contexts:
- context:
    cluster: cert-expiration-381364
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 17:45:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-381364
  name: cert-expiration-381364
- context:
    cluster: kubernetes-upgrade-857251
    user: kubernetes-upgrade-857251
  name: kubernetes-upgrade-857251
current-context: cert-expiration-381364
kind: Config
preferences: {}
users:
- name: cert-expiration-381364
  user:
    client-certificate: /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/cert-expiration-381364/client.crt
    client-key: /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/cert-expiration-381364/client.key
- name: kubernetes-upgrade-857251
  user:
    client-certificate: /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/kubernetes-upgrade-857251/client.crt
    client-key: /home/jenkins/minikube-integration/19712-11000/.minikube/profiles/kubernetes-upgrade-857251/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-743431

>>> host: docker daemon status:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: docker daemon config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: docker system info:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: cri-docker daemon status:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: cri-docker daemon config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: cri-dockerd version:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: containerd daemon status:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: containerd daemon config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: containerd config dump:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: crio daemon status:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: crio daemon config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: /etc/crio:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"

>>> host: crio config:
* Profile "cilium-743431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-743431"
----------------------- debugLogs end: cilium-743431 [took: 3.711495571s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-743431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-743431
--- SKIP: TestNetworkPlugins/group/cilium (3.87s)
