Test Report: Docker_Linux 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302

Failed tests (1/342)

| Order | Failed Test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 73.3s    |
TestAddons/parallel/Registry (73.3s)
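
The failure below is a timeout on the test's in-cluster reachability probe: a busybox pod must fetch the registry Service by its cluster DNS name and get an HTTP 200 back within one minute, and in this run the wget hangs until kubectl gives up. A minimal sketch of the same probe run by hand (profile, image, and Service names are taken from this log); the endpoint check at the end is an assumption about how one might narrow such a timeout down, not a step the test performs:

	# the probe the test runs (times out after 1m0s in this run)
	kubectl --context addons-135472 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# if it hangs: does the Service have any ready endpoints behind it?
	kubectl --context addons-135472 -n kube-system get endpoints registry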

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.526654ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-n8x7q" [ab1423e5-b667-4a7f-96f5-061bb4596eeb] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00269578s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8z8jc" [db35f6da-74a1-46c1-8ca2-4c7e51bf1986] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002843702s
addons_test.go:338: (dbg) Run:  kubectl --context addons-135472 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-135472 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-135472 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.076617631s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-135472 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 ip
2024/09/20 21:01:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable registry --alsologtostderr -v=1
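
The [DEBUG] GET line above is the test polling the registry a second way, through the node IP that the `ip` command printed. The same check can be repeated by hand; a sketch, assuming the addon serves the standard Docker Registry HTTP API (the /v2/ version-check path is that API's convention, not something shown in this log):

	# node-IP route to the registry addon; expect an HTTP 200 header block
	curl -sI http://192.168.49.2:5000/v2/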
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-135472
helpers_test.go:235: (dbg) docker inspect addons-135472:

-- stdout --
	[
	    {
	        "Id": "f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3",
	        "Created": "2024-09-20T20:48:12.857258893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18381,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T20:48:12.975780562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3/hostname",
	        "HostsPath": "/var/lib/docker/containers/f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3/hosts",
	        "LogPath": "/var/lib/docker/containers/f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3/f41c27280e30b093d965dee05d6cd1ed653e9ef1b0cd6d7ffb9de2a1c20deea3-json.log",
	        "Name": "/addons-135472",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-135472:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-135472",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f209ae95c12dac22375c1b9ef598d41be2892650e39d7191bcc99fe15053ed8f-init/diff:/var/lib/docker/overlay2/467fd240a27e84496133f634bb50855964a0d9e03013662bcf99182d5b8fdb59/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f209ae95c12dac22375c1b9ef598d41be2892650e39d7191bcc99fe15053ed8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f209ae95c12dac22375c1b9ef598d41be2892650e39d7191bcc99fe15053ed8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f209ae95c12dac22375c1b9ef598d41be2892650e39d7191bcc99fe15053ed8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-135472",
	                "Source": "/var/lib/docker/volumes/addons-135472/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-135472",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-135472",
	                "name.minikube.sigs.k8s.io": "addons-135472",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dabe47a7a3640720971594b9815f336bc70b32d3422bfc75d6c5ffb81de3cc31",
	            "SandboxKey": "/var/run/docker/netns/dabe47a7a364",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-135472": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5df1c68f89b4f25ea932b34d898870b97f752788af0a167f520f95df7e91872e",
	                    "EndpointID": "0417389d16baf5cbfb2cad117eb13036b2da22003d59024e78bb5b9edf7bbef7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-135472",
	                        "f41c27280e30"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
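
In the inspect output above, HostConfig.PortBindings requests every port on 127.0.0.1 with an empty HostPort, i.e. an ephemeral host port, and NetworkSettings.Ports records what the daemon actually assigned (22/tcp -> 32768 is the SSH port the provisioner dials later in this log). A sketch of reading an assignment back; the second command uses the same Go template the harness itself runs further down:

	# both print the host address mapped to the container's SSH port
	docker port addons-135472 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-135472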
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-135472 -n addons-135472
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-003803 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | download-docker-003803                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-003803                                                                   | download-docker-003803 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-574210   | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | binary-mirror-574210                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39611                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-574210                                                                     | binary-mirror-574210   | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-135472                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-135472                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-135472 --wait=true                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 20:51 UTC | 20 Sep 24 20:51 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-135472 addons                                                                        | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | -p addons-135472                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | addons-135472                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | -p addons-135472                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-135472 ssh cat                                                                       | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | /opt/local-path-provisioner/pvc-1b91b000-5e84-4be3-a317-9707e25013f8_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | addons-135472                                                                               |                        |         |         |                     |                     |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-135472 ssh curl -s                                                                   | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-135472 ip                                                                            | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-135472 addons                                                                        | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-135472 addons                                                                        | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-135472 ip                                                                            | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	| addons  | addons-135472 addons disable                                                                | addons-135472          | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:49.595195   17609 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:49.595274   17609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:49.595279   17609 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:49.595283   17609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:49.595446   17609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 20:47:49.595955   17609 out.go:352] Setting JSON to false
	I0920 20:47:49.596718   17609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1818,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:49.596798   17609 start.go:139] virtualization: kvm guest
	I0920 20:47:49.598829   17609 out.go:177] * [addons-135472] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:49.599991   17609 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:47:49.600004   17609 notify.go:220] Checking for updates...
	I0920 20:47:49.602178   17609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:49.603414   17609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 20:47:49.604817   17609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	I0920 20:47:49.606041   17609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:47:49.607143   17609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:47:49.608283   17609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:49.628337   17609 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:47:49.628424   17609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:49.668264   17609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 20:47:49.660396796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:49.668394   17609 docker.go:318] overlay module found
	I0920 20:47:49.670132   17609 out.go:177] * Using the docker driver based on user configuration
	I0920 20:47:49.671221   17609 start.go:297] selected driver: docker
	I0920 20:47:49.671233   17609 start.go:901] validating driver "docker" against <nil>
	I0920 20:47:49.671247   17609 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:47:49.671967   17609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:49.711685   17609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 20:47:49.704083279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:49.711852   17609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:49.712071   17609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:47:49.713550   17609 out.go:177] * Using Docker driver with root privileges
	I0920 20:47:49.714804   17609 cni.go:84] Creating CNI manager for ""
	I0920 20:47:49.714867   17609 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:47:49.714880   17609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:49.714945   17609 start.go:340] cluster config:
	{Name:addons-135472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-135472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:49.716124   17609 out.go:177] * Starting "addons-135472" primary control-plane node in "addons-135472" cluster
	I0920 20:47:49.717136   17609 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 20:47:49.718241   17609 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 20:47:49.719194   17609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:47:49.719221   17609 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 20:47:49.719232   17609 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 20:47:49.719239   17609 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:49.719319   17609 preload.go:172] Found /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 20:47:49.719329   17609 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 20:47:49.719615   17609 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/config.json ...
	I0920 20:47:49.719635   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/config.json: {Name:mkfde33f11021d55e33f07d8236ae59c0e285310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:49.733126   17609 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 20:47:49.733211   17609 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 20:47:49.733228   17609 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 20:47:49.733232   17609 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 20:47:49.733239   17609 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 20:47:49.733246   17609 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 20:48:01.398998   17609 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 20:48:01.399032   17609 cache.go:194] Successfully downloaded all kic artifacts
	I0920 20:48:01.399072   17609 start.go:360] acquireMachinesLock for addons-135472: {Name:mkceb13fc1a1aab5ee2d4770834cefb10ed88226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:48:01.399170   17609 start.go:364] duration metric: took 79.648µs to acquireMachinesLock for "addons-135472"
	I0920 20:48:01.399195   17609 start.go:93] Provisioning new machine with config: &{Name:addons-135472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-135472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:48:01.399273   17609 start.go:125] createHost starting for "" (driver="docker")
	I0920 20:48:01.401003   17609 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 20:48:01.401234   17609 start.go:159] libmachine.API.Create for "addons-135472" (driver="docker")
	I0920 20:48:01.401274   17609 client.go:168] LocalClient.Create starting
	I0920 20:48:01.401357   17609 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem
	I0920 20:48:01.589320   17609 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/cert.pem
	I0920 20:48:01.756574   17609 cli_runner.go:164] Run: docker network inspect addons-135472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 20:48:01.771856   17609 cli_runner.go:211] docker network inspect addons-135472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 20:48:01.771916   17609 network_create.go:284] running [docker network inspect addons-135472] to gather additional debugging logs...
	I0920 20:48:01.771932   17609 cli_runner.go:164] Run: docker network inspect addons-135472
	W0920 20:48:01.785948   17609 cli_runner.go:211] docker network inspect addons-135472 returned with exit code 1
	I0920 20:48:01.785970   17609 network_create.go:287] error running [docker network inspect addons-135472]: docker network inspect addons-135472: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-135472 not found
	I0920 20:48:01.785983   17609 network_create.go:289] output of [docker network inspect addons-135472]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-135472 not found
	
	** /stderr **
	I0920 20:48:01.786054   17609 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 20:48:01.799553   17609 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a1a9f0}
	I0920 20:48:01.799590   17609 network_create.go:124] attempt to create docker network addons-135472 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 20:48:01.799624   17609 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-135472 addons-135472
	I0920 20:48:01.854867   17609 network_create.go:108] docker network addons-135472 192.168.49.0/24 created
	I0920 20:48:01.854896   17609 kic.go:121] calculated static IP "192.168.49.2" for the "addons-135472" container
	I0920 20:48:01.854949   17609 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 20:48:01.870070   17609 cli_runner.go:164] Run: docker volume create addons-135472 --label name.minikube.sigs.k8s.io=addons-135472 --label created_by.minikube.sigs.k8s.io=true
	I0920 20:48:01.885512   17609 oci.go:103] Successfully created a docker volume addons-135472
	I0920 20:48:01.885585   17609 cli_runner.go:164] Run: docker run --rm --name addons-135472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135472 --entrypoint /usr/bin/test -v addons-135472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 20:48:09.020783   17609 cli_runner.go:217] Completed: docker run --rm --name addons-135472-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135472 --entrypoint /usr/bin/test -v addons-135472:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (7.135150148s)
	I0920 20:48:09.020812   17609 oci.go:107] Successfully prepared a docker volume addons-135472
	I0920 20:48:09.020833   17609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:48:09.020854   17609 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 20:48:09.020912   17609 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-135472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 20:48:12.803589   17609 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-135472:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.782641686s)
	I0920 20:48:12.803614   17609 kic.go:203] duration metric: took 3.782758484s to extract preloaded images to volume ...
	W0920 20:48:12.803728   17609 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 20:48:12.803880   17609 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 20:48:12.844031   17609 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-135472 --name addons-135472 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135472 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-135472 --network addons-135472 --ip 192.168.49.2 --volume addons-135472:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 20:48:13.137158   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Running}}
	I0920 20:48:13.154357   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:13.171762   17609 cli_runner.go:164] Run: docker exec addons-135472 stat /var/lib/dpkg/alternatives/iptables
	I0920 20:48:13.210825   17609 oci.go:144] the created container "addons-135472" has a running status.
	I0920 20:48:13.210851   17609 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa...
	I0920 20:48:13.297881   17609 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 20:48:13.315591   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:13.330304   17609 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 20:48:13.330323   17609 kic_runner.go:114] Args: [docker exec --privileged addons-135472 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 20:48:13.371368   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:13.385684   17609 machine.go:93] provisionDockerMachine start ...
	I0920 20:48:13.385756   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:13.399880   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:13.400111   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:13.400129   17609 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 20:48:13.400683   17609 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38192->127.0.0.1:32768: read: connection reset by peer
	I0920 20:48:16.524245   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-135472
	
	I0920 20:48:16.524272   17609 ubuntu.go:169] provisioning hostname "addons-135472"
	I0920 20:48:16.524328   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:16.540143   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:16.540296   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:16.540309   17609 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-135472 && echo "addons-135472" | sudo tee /etc/hostname
	I0920 20:48:16.674058   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-135472
	
	I0920 20:48:16.674118   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:16.688911   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:16.689071   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:16.689087   17609 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-135472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-135472/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-135472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:48:16.812537   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:16.812561   17609 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9514/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9514/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9514/.minikube}
	I0920 20:48:16.812592   17609 ubuntu.go:177] setting up certificates
	I0920 20:48:16.812614   17609 provision.go:84] configureAuth start
	I0920 20:48:16.812659   17609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135472
	I0920 20:48:16.827061   17609 provision.go:143] copyHostCerts
	I0920 20:48:16.827124   17609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9514/.minikube/ca.pem (1082 bytes)
	I0920 20:48:16.827234   17609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9514/.minikube/cert.pem (1123 bytes)
	I0920 20:48:16.827291   17609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9514/.minikube/key.pem (1679 bytes)
	I0920 20:48:16.827342   17609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9514/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca-key.pem org=jenkins.addons-135472 san=[127.0.0.1 192.168.49.2 addons-135472 localhost minikube]
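minikube generates that SAN server certificate in Go; an approximate openssl CLI equivalent (OpenSSL 3.x for -copy_extensions; key and file names illustrative) would be:

    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-135472" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-135472,DNS:localhost,DNS:minikube" \
      -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -copy_extensions copy -days 365 -out server.pem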
	I0920 20:48:17.067241   17609 provision.go:177] copyRemoteCerts
	I0920 20:48:17.067290   17609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:48:17.067322   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:17.082987   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:17.172678   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 20:48:17.192362   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 20:48:17.211466   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 20:48:17.230140   17609 provision.go:87] duration metric: took 417.510629ms to configureAuth
	I0920 20:48:17.230164   17609 ubuntu.go:193] setting minikube options for container-runtime
	I0920 20:48:17.230312   17609 config.go:182] Loaded profile config "addons-135472": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:17.230351   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:17.245397   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.245583   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:17.245598   17609 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 20:48:17.369043   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 20:48:17.369068   17609 ubuntu.go:71] root file system type: overlay
	I0920 20:48:17.369222   17609 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 20:48:17.369275   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:17.384307   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.384451   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:17.384506   17609 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 20:48:17.518528   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 20:48:17.518608   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:17.533343   17609 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:17.533536   17609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 20:48:17.533561   17609 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 20:48:18.171247   17609 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 20:48:17.512989472 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 20:48:18.171272   17609 machine.go:96] duration metric: took 4.785570882s to provisionDockerMachine
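The unit update that just completed follows a write-new/diff/swap pattern: the desired unit is written to docker.service.new, and only if it differs from the live file is it moved into place and Docker restarted, so reprovisioning an unchanged machine never bounces the daemon. Restated from the SSH command above (unit content as already shown):

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }

diff exits 0 only when the files are identical, so the restart branch also runs on first boot, when the old unit differs from the generated one, as the -/+ hunks above show.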
	I0920 20:48:18.171283   17609 client.go:171] duration metric: took 16.770001018s to LocalClient.Create
	I0920 20:48:18.171298   17609 start.go:167] duration metric: took 16.770066651s to libmachine.API.Create "addons-135472"
	I0920 20:48:18.171305   17609 start.go:293] postStartSetup for "addons-135472" (driver="docker")
	I0920 20:48:18.171314   17609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:18.171358   17609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:18.171389   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:18.186559   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:18.276869   17609 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:48:18.279370   17609 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 20:48:18.279399   17609 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 20:48:18.279406   17609 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 20:48:18.279414   17609 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 20:48:18.279423   17609 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9514/.minikube/addons for local assets ...
	I0920 20:48:18.279468   17609 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9514/.minikube/files for local assets ...
	I0920 20:48:18.279491   17609 start.go:296] duration metric: took 108.180687ms for postStartSetup
	I0920 20:48:18.279717   17609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135472
	I0920 20:48:18.294318   17609 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/config.json ...
	I0920 20:48:18.294525   17609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:48:18.294561   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:18.308963   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:18.397264   17609 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 20:48:18.400802   17609 start.go:128] duration metric: took 17.001517315s to createHost
	I0920 20:48:18.400821   17609 start.go:83] releasing machines lock for "addons-135472", held for 17.001637211s
	I0920 20:48:18.400867   17609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135472
	I0920 20:48:18.416868   17609 ssh_runner.go:195] Run: cat /version.json
	I0920 20:48:18.416906   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:18.416965   17609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:48:18.417036   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:18.432505   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:18.433800   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:18.516201   17609 ssh_runner.go:195] Run: systemctl --version
	I0920 20:48:18.584952   17609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 20:48:18.588499   17609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 20:48:18.608767   17609 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 20:48:18.608814   17609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:18.631114   17609 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
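The two find commands above first patch any loopback CNI config to carry a name and cniVersion 1.0.0, then park bridge/podman configs by renaming them, leaving a single active CNI config for kubeadm. The renaming step, restated without the -printf bookkeeping:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;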
	I0920 20:48:18.631131   17609 start.go:495] detecting cgroup driver to use...
	I0920 20:48:18.631159   17609 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:48:18.631257   17609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:18.643481   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 20:48:18.650963   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 20:48:18.658231   17609 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 20:48:18.658278   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 20:48:18.665986   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:18.673571   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 20:48:18.680962   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:18.688614   17609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:18.695951   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 20:48:18.703681   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 20:48:18.711443   17609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 20:48:18.719266   17609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:18.725887   17609 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 20:48:18.725923   17609 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 20:48:18.737216   17609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
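The sysctl probe at 20:48:18.719266 exits 255 because br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding. The recovery sequence is simply:

    sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolvable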
	I0920 20:48:18.743894   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:18.811730   17609 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 20:48:18.889082   17609 start.go:495] detecting cgroup driver to use...
	I0920 20:48:18.889128   17609 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:48:18.889192   17609 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 20:48:18.898926   17609 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 20:48:18.898983   17609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 20:48:18.908643   17609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:18.922384   17609 ssh_runner.go:195] Run: which cri-dockerd
	I0920 20:48:18.925128   17609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 20:48:18.932915   17609 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 20:48:18.948226   17609 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 20:48:19.029786   17609 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 20:48:19.111221   17609 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 20:48:19.111352   17609 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 20:48:19.125682   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:19.193033   17609 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 20:48:19.421870   17609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 20:48:19.431781   17609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:19.441015   17609 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 20:48:19.515978   17609 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 20:48:19.587593   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:19.659324   17609 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 20:48:19.669757   17609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:19.678122   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:19.747744   17609 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 20:48:19.801221   17609 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 20:48:19.801294   17609 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 20:48:19.804335   17609 start.go:563] Will wait 60s for crictl version
	I0920 20:48:19.804382   17609 ssh_runner.go:195] Run: which crictl
	I0920 20:48:19.807201   17609 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:48:19.835847   17609 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 20:48:19.835890   17609 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 20:48:19.856171   17609 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 20:48:19.876877   17609 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 20:48:19.876957   17609 cli_runner.go:164] Run: docker network inspect addons-135472 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 20:48:19.891947   17609 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:19.894906   17609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
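The /etc/hosts rewrite above (and the identical one later for control-plane.minikube.internal) filters out any stale entry, appends the fresh mapping to a temp file, and copies it back with cp rather than mv: inside a Docker container /etc/hosts is a bind mount, so the file can be overwritten in place but not replaced by rename. Expanded (bash, for the $'\t' quoting):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp writes through the bind mount; mv would fail with EBUSY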
	I0920 20:48:19.904191   17609 kubeadm.go:883] updating cluster {Name:addons-135472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-135472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0920 20:48:19.904300   17609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:48:19.904345   17609 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 20:48:19.920505   17609 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 20:48:19.920522   17609 docker.go:615] Images already preloaded, skipping extraction
	I0920 20:48:19.920569   17609 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 20:48:19.935725   17609 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 20:48:19.935747   17609 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:48:19.935759   17609 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 20:48:19.935869   17609 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-135472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-135472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
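The kubelet flags printed above land on the node a few lines later as the 312-byte systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below); the empty ExecStart= line clears the base unit's command before setting the minikube-specific one, the same trick used in the docker.service unit earlier. Written by hand it would look like:

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-135472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF
    sudo systemctl daemon-reload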
	I0920 20:48:19.935918   17609 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 20:48:19.975115   17609 cni.go:84] Creating CNI manager for ""
	I0920 20:48:19.975139   17609 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:19.975148   17609 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:19.975168   17609 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-135472 NodeName:addons-135472 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:19.975326   17609 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-135472"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 20:48:19.975388   17609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:19.982751   17609 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:48:19.982812   17609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:19.989828   17609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 20:48:20.004244   17609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:20.018176   17609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
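Once kubeadm.yaml.new is on the node it can be sanity-checked before init; 'kubeadm config validate' is available on recent kubeadm releases (v1.26+), and 'print init-defaults' is useful for comparing the generated file against upstream defaults (binary path taken from the log):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    /var/lib/minikube/binaries/v1.31.1/kubeadm config print init-defaults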
	I0920 20:48:20.032748   17609 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:20.035627   17609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:20.044378   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:20.112623   17609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:20.123229   17609 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472 for IP: 192.168.49.2
	I0920 20:48:20.123243   17609 certs.go:194] generating shared ca certs ...
	I0920 20:48:20.123257   17609 certs.go:226] acquiring lock for ca certs: {Name:mke171823f01199a0b3b7794b5263fc14bd774ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.123357   17609 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9514/.minikube/ca.key
	I0920 20:48:20.375201   17609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt ...
	I0920 20:48:20.375224   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt: {Name:mkde573a47ae4b9856c76951253e219432b4eacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.375361   17609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9514/.minikube/ca.key ...
	I0920 20:48:20.375370   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/ca.key: {Name:mk5da5948f111a983464b660fda183591ee045d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.375438   17609 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.key
	I0920 20:48:20.416140   17609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.crt ...
	I0920 20:48:20.416158   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.crt: {Name:mk66f7340eaef942dd46c29e9b11fc3e8e281e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.416260   17609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.key ...
	I0920 20:48:20.416269   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.key: {Name:mk8b151339bb4f28791e62f764eef3ab92c0dc48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.416328   17609 certs.go:256] generating profile certs ...
	I0920 20:48:20.416373   17609 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.key
	I0920 20:48:20.416387   17609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt with IP's: []
	I0920 20:48:20.550916   17609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt ...
	I0920 20:48:20.550939   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: {Name:mk3a2f3373d9f9f36ab1e1e7e369e2a555d1d88e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.551062   17609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.key ...
	I0920 20:48:20.551073   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.key: {Name:mkfdc35a57265d3d42d627cc325844d768ba3ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.551136   17609 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key.247b41b2
	I0920 20:48:20.551153   17609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt.247b41b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 20:48:20.714474   17609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt.247b41b2 ...
	I0920 20:48:20.714491   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt.247b41b2: {Name:mk5c0a743de50447e0822476f4f8dc2a5d7a95a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.714598   17609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key.247b41b2 ...
	I0920 20:48:20.714609   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key.247b41b2: {Name:mkfc5dd59d8f4f92c08b559ca341314acca771c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.714671   17609 certs.go:381] copying /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt.247b41b2 -> /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt
	I0920 20:48:20.714736   17609 certs.go:385] copying /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key.247b41b2 -> /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key
	I0920 20:48:20.714780   17609 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.key
	I0920 20:48:20.714795   17609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.crt with IP's: []
	I0920 20:48:20.822924   17609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.crt ...
	I0920 20:48:20.822943   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.crt: {Name:mka900140cba3b2c5393e2ede0869264da1a04e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.823061   17609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.key ...
	I0920 20:48:20.823073   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.key: {Name:mkd2f80bf4c8cdca636f9a2dfe3e391857631ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:20.823276   17609 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 20:48:20.823305   17609 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/ca.pem (1082 bytes)
	I0920 20:48:20.823340   17609 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:20.823375   17609 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9514/.minikube/certs/key.pem (1679 bytes)
	I0920 20:48:20.824108   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:20.844285   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 20:48:20.862887   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:20.881346   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 20:48:20.900059   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 20:48:20.919341   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 20:48:20.938260   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:20.957297   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 20:48:20.976071   17609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:20.995511   17609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:21.010955   17609 ssh_runner.go:195] Run: openssl version
	I0920 20:48:21.015418   17609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:21.023147   17609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:21.026020   17609 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:21.026062   17609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:21.031637   17609 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
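The b5213941.0 symlink name is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by the certificate's subject-name hash, and because minikube's CA subject is always minikubeCA the hash is stable. The name comes straight from the openssl command run two lines up:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0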
	I0920 20:48:21.038860   17609 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:21.041418   17609 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:21.041456   17609 kubeadm.go:392] StartCluster: {Name:addons-135472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-135472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:21.041571   17609 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 20:48:21.056504   17609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:21.063238   17609 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:21.070170   17609 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 20:48:21.070206   17609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:21.076912   17609 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:21.076925   17609 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:21.076952   17609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:21.083514   17609 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:21.083552   17609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:21.090077   17609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:21.096630   17609 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:21.096668   17609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:21.102972   17609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:21.109577   17609 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:21.109614   17609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:21.116014   17609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:21.122802   17609 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:21.122832   17609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
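The four grep/rm pairs above are one idempotent sweep: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is removed before kubeadm init runs (here the files simply do not exist yet, hence the status-2 greps). As a loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done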
	I0920 20:48:21.130046   17609 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 20:48:21.161311   17609 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:21.161374   17609 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:21.178871   17609 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 20:48:21.178950   17609 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 20:48:21.178999   17609 kubeadm.go:310] OS: Linux
	I0920 20:48:21.179073   17609 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 20:48:21.179147   17609 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 20:48:21.179231   17609 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 20:48:21.179303   17609 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 20:48:21.179376   17609 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 20:48:21.179418   17609 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 20:48:21.179457   17609 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 20:48:21.179499   17609 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 20:48:21.179548   17609 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 20:48:21.223726   17609 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:21.223865   17609 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:21.224002   17609 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:21.232797   17609 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:21.235307   17609 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:21.235406   17609 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:21.235510   17609 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:21.341616   17609 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:21.473249   17609 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:21.572138   17609 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:21.888300   17609 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:21.986111   17609 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:21.986242   17609 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-135472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 20:48:22.150529   17609 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:22.150680   17609 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-135472 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 20:48:22.582891   17609 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:48:22.934736   17609 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:48:23.050268   17609 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:48:23.050369   17609 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:48:23.263098   17609 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:48:23.399518   17609 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:48:23.551643   17609 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:48:23.614385   17609 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:48:23.903381   17609 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:48:23.903865   17609 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:48:23.906169   17609 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:48:23.908369   17609 out.go:235]   - Booting up control plane ...
	I0920 20:48:23.908492   17609 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:48:23.908576   17609 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:48:23.908651   17609 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:48:23.919872   17609 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:48:23.925188   17609 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:48:23.925276   17609 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:48:24.002095   17609 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:48:24.002251   17609 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:48:25.003839   17609 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682763s
	I0920 20:48:25.003943   17609 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:48:29.004985   17609 kubeadm.go:310] [api-check] The API server is healthy after 4.001248073s
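
	Both readiness gates above are plain HTTP health probes; a sketch of hitting the same endpoints from the node (10248 is the kubelet healthz port from the log, 8443 the apiserver port this profile uses; -k skips certificate-name verification since the serving cert is issued for the cluster names, not 127.0.0.1):

	    # Same endpoint as [kubelet-check]
	    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
	    # Same endpoint as [api-check]; /healthz is readable by unauthenticated clients by default
	    curl -skf https://127.0.0.1:8443/healthz && echo "apiserver healthy"
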
	I0920 20:48:29.026516   17609 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:48:29.035098   17609 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:48:29.048606   17609 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:48:29.048884   17609 kubeadm.go:310] [mark-control-plane] Marking the node addons-135472 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:48:29.054777   17609 kubeadm.go:310] [bootstrap-token] Using token: yog870.4e36o6rp1fgnt41w
	I0920 20:48:29.056275   17609 out.go:235]   - Configuring RBAC rules ...
	I0920 20:48:29.056417   17609 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:48:29.058672   17609 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:48:29.063316   17609 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:48:29.065318   17609 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:48:29.067219   17609 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:48:29.069924   17609 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:48:29.410081   17609 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:48:29.826041   17609 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:48:30.408968   17609 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:48:30.409659   17609 kubeadm.go:310] 
	I0920 20:48:30.409776   17609 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:48:30.409795   17609 kubeadm.go:310] 
	I0920 20:48:30.409885   17609 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:48:30.409896   17609 kubeadm.go:310] 
	I0920 20:48:30.409917   17609 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:48:30.409975   17609 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:48:30.410022   17609 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:48:30.410028   17609 kubeadm.go:310] 
	I0920 20:48:30.410087   17609 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:48:30.410094   17609 kubeadm.go:310] 
	I0920 20:48:30.410133   17609 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:48:30.410150   17609 kubeadm.go:310] 
	I0920 20:48:30.410235   17609 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:48:30.410343   17609 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:48:30.410430   17609 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:48:30.410441   17609 kubeadm.go:310] 
	I0920 20:48:30.410561   17609 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:48:30.410663   17609 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:48:30.410672   17609 kubeadm.go:310] 
	I0920 20:48:30.410796   17609 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yog870.4e36o6rp1fgnt41w \
	I0920 20:48:30.410881   17609 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f491762512938f72e6c8d2a7fb1aa6e0bfc1ffa5d0eb3a7bd12200f4fb3d9bd7 \
	I0920 20:48:30.410899   17609 kubeadm.go:310] 	--control-plane 
	I0920 20:48:30.410906   17609 kubeadm.go:310] 
	I0920 20:48:30.411010   17609 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:48:30.411021   17609 kubeadm.go:310] 
	I0920 20:48:30.411126   17609 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yog870.4e36o6rp1fgnt41w \
	I0920 20:48:30.411251   17609 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f491762512938f72e6c8d2a7fb1aa6e0bfc1ffa5d0eb3a7bd12200f4fb3d9bd7 
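
	If the printed --discovery-token-ca-cert-hash is ever lost, it can be re-derived from the cluster CA; the standard openssl recipe, pointed at this run's certificateDir (/var/lib/minikube/certs, from the [certs] phase above):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex \
	      | sed 's/^.* //'     # leaves just the hex digest for the sha256:<hex> flag value
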
	I0920 20:48:30.412555   17609 kubeadm.go:310] W0920 20:48:21.159026    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:30.412816   17609 kubeadm.go:310] W0920 20:48:21.159550    1926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:30.412990   17609 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 20:48:30.413113   17609 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
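
	The two v1beta3 deprecation warnings above name their own remediation; a sketch of running it against the config file this init consumed (the --new-config path is a hypothetical choice):

	    # Rewrite the deprecated kubeadm.k8s.io/v1beta3 documents as the current API version.
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # hypothetical output path
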
	I0920 20:48:30.413139   17609 cni.go:84] Creating CNI manager for ""
	I0920 20:48:30.413159   17609 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:30.415033   17609 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:48:30.417760   17609 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:48:30.425472   17609 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
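
	The 496-byte conflist copied above is not echoed in the log; a representative bridge-plus-portmap conflist of the kind the bridge CNI choice implies (every value below, including the 10.244.0.0/16 pod CIDR, is an assumption, not the file's actual contents):

	    # /etc/cni/net.d/1-k8s.conflist -- illustrative contents only
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
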
	I0920 20:48:30.440471   17609 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:48:30.440529   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:30.440555   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-135472 minikube.k8s.io/updated_at=2024_09_20T20_48_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-135472 minikube.k8s.io/primary=true
	I0920 20:48:30.446893   17609 ops.go:34] apiserver oom_adj: -16
	I0920 20:48:30.506221   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:31.006863   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:31.506401   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:32.006877   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:32.506272   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:33.006765   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:33.506709   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:34.007005   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:34.506475   17609 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:34.564756   17609 kubeadm.go:1113] duration metric: took 4.124274043s to wait for elevateKubeSystemPrivileges
	I0920 20:48:34.564796   17609 kubeadm.go:394] duration metric: took 13.523341077s to StartCluster
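
	The burst of `kubectl get sa default` calls above is a poll: the cluster-admin binding for kube-system:default can only be created once the controller-manager has minted the default service account. The same wait-then-bind sequence as a shell sketch (paths from the log; the 0.5s interval mirrors the log's cadence):

	    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
	    KCFG=/var/lib/minikube/kubeconfig
	    # Poll until the default service account exists.
	    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    # Then grant cluster-admin to kube-system's default service account.
	    sudo "$KUBECTL" --kubeconfig="$KCFG" create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default
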
	I0920 20:48:34.564817   17609 settings.go:142] acquiring lock: {Name:mk599dae52b8e72abfd50bf7fe2ec2d4b59104d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:34.564911   17609 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 20:48:34.565246   17609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/kubeconfig: {Name:mk83fe77f0521522a623481e5a97162528173507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:34.565410   17609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:48:34.565414   17609 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:48:34.565485   17609 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 20:48:34.565614   17609 addons.go:69] Setting yakd=true in profile "addons-135472"
	I0920 20:48:34.565617   17609 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-135472"
	I0920 20:48:34.565627   17609 config.go:182] Loaded profile config "addons-135472": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:34.565650   17609 addons.go:234] Setting addon yakd=true in "addons-135472"
	I0920 20:48:34.565669   17609 addons.go:69] Setting cloud-spanner=true in profile "addons-135472"
	I0920 20:48:34.565676   17609 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-135472"
	I0920 20:48:34.565680   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.565686   17609 addons.go:234] Setting addon cloud-spanner=true in "addons-135472"
	I0920 20:48:34.565700   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.565710   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.565786   17609 addons.go:69] Setting metrics-server=true in profile "addons-135472"
	I0920 20:48:34.565823   17609 addons.go:234] Setting addon metrics-server=true in "addons-135472"
	I0920 20:48:34.565853   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.565926   17609 addons.go:69] Setting storage-provisioner=true in profile "addons-135472"
	I0920 20:48:34.565952   17609 addons.go:234] Setting addon storage-provisioner=true in "addons-135472"
	I0920 20:48:34.565972   17609 addons.go:69] Setting volcano=true in profile "addons-135472"
	I0920 20:48:34.566014   17609 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-135472"
	I0920 20:48:34.566033   17609 addons.go:234] Setting addon volcano=true in "addons-135472"
	I0920 20:48:34.566042   17609 addons.go:69] Setting registry=true in profile "addons-135472"
	I0920 20:48:34.566057   17609 addons.go:234] Setting addon registry=true in "addons-135472"
	I0920 20:48:34.566078   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.566078   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.566251   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566320   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566344   17609 addons.go:69] Setting volumesnapshots=true in profile "addons-135472"
	I0920 20:48:34.566368   17609 addons.go:234] Setting addon volumesnapshots=true in "addons-135472"
	I0920 20:48:34.566394   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.566470   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566534   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566792   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566928   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566321   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566035   17609 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-135472"
	I0920 20:48:34.567318   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.567369   17609 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-135472"
	I0920 20:48:34.567397   17609 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-135472"
	I0920 20:48:34.567696   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.567837   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.566004   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.568301   17609 out.go:177] * Verifying Kubernetes components...
	I0920 20:48:34.568418   17609 addons.go:69] Setting ingress-dns=true in profile "addons-135472"
	I0920 20:48:34.568640   17609 addons.go:234] Setting addon ingress-dns=true in "addons-135472"
	I0920 20:48:34.568676   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.569291   17609 addons.go:69] Setting inspektor-gadget=true in profile "addons-135472"
	I0920 20:48:34.569318   17609 addons.go:234] Setting addon inspektor-gadget=true in "addons-135472"
	I0920 20:48:34.569351   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.569603   17609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:34.569811   17609 addons.go:69] Setting gcp-auth=true in profile "addons-135472"
	I0920 20:48:34.569835   17609 mustload.go:65] Loading cluster: addons-135472
	I0920 20:48:34.569931   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.570008   17609 config.go:182] Loaded profile config "addons-135472": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:48:34.570254   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.571772   17609 addons.go:69] Setting default-storageclass=true in profile "addons-135472"
	I0920 20:48:34.571820   17609 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-135472"
	I0920 20:48:34.571905   17609 addons.go:69] Setting ingress=true in profile "addons-135472"
	I0920 20:48:34.571939   17609 addons.go:234] Setting addon ingress=true in "addons-135472"
	I0920 20:48:34.571986   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.569235   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.593960   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.594401   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.594978   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.602430   17609 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:48:34.605983   17609 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:48:34.606006   17609 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:48:34.606060   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.610453   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:48:34.611656   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:48:34.612772   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:48:34.613882   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:48:34.614977   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:48:34.616146   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:48:34.617189   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:48:34.618339   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:48:34.619336   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:48:34.619359   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:48:34.619410   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.621014   17609 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:48:34.621999   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:48:34.622017   17609 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:48:34.622063   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.623662   17609 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 20:48:34.624954   17609 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:48:34.626021   17609 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 20:48:34.627043   17609 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:48:34.628125   17609 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:48:34.628141   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:48:34.628190   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.629342   17609 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 20:48:34.630002   17609 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:48:34.631392   17609 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:34.631408   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:48:34.631452   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.643376   17609 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:48:34.643406   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 20:48:34.643577   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.648489   17609 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:48:34.649669   17609 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 20:48:34.649745   17609 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:34.649762   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:48:34.649827   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.651120   17609 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:34.651141   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 20:48:34.651193   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.652602   17609 addons.go:234] Setting addon default-storageclass=true in "addons-135472"
	I0920 20:48:34.652609   17609 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-135472"
	I0920 20:48:34.652630   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.652654   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.653006   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.653236   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:34.656799   17609 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:48:34.658966   17609 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:48:34.658984   17609 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:48:34.659030   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.661631   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:34.664718   17609 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 20:48:34.665807   17609 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:48:34.665864   17609 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:34.667281   17609 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:34.667313   17609 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:34.667324   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:48:34.667370   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.668562   17609 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:34.668578   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 20:48:34.668655   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.674350   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
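
	Each docker-inspect/new-ssh-client pair above resolves the container's published SSH port and dials it on 127.0.0.1; the same lookup by hand (profile name and key path from the log; the remote command is illustrative):

	    PORT=$(docker container inspect \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-135472)
	    ssh -p "$PORT" \
	      -i /home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa \
	      docker@127.0.0.1 uname -a
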
	I0920 20:48:34.682432   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.686639   17609 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:48:34.687671   17609 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:48:34.687689   17609 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:48:34.687738   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.705056   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.715840   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.718518   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.721207   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.723049   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.724655   17609 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:34.724670   17609 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:48:34.724721   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.730039   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.731409   17609 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:48:34.732645   17609 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:48:34.732656   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.733858   17609 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:34.733880   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:48:34.733925   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:34.735101   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.737313   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.738674   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.743716   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	W0920 20:48:34.748390   17609 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 20:48:34.748416   17609 retry.go:31] will retry after 138.91814ms: ssh: handshake failed: EOF
	W0920 20:48:34.748496   17609 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 20:48:34.748507   17609 retry.go:31] will retry after 344.583853ms: ssh: handshake failed: EOF
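
	The handshake failures above are absorbed by retry.go's randomized backoff rather than failing the run; a minimal shell equivalent of that wrapper (attempt count, delay spread, and the sample command are illustrative):

	    retry() {  # usage: retry <max_attempts> <command...>
	      local attempt=0 max="$1"; shift
	      until "$@"; do
	        attempt=$((attempt + 1))
	        [ "$attempt" -ge "$max" ] && return 1
	        sleep "0.$((RANDOM % 4 + 1))"   # crude randomized backoff, 0.1-0.4s
	      done
	    }
	    retry 5 ssh -o ConnectTimeout=2 -p 32768 docker@127.0.0.1 true
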
	I0920 20:48:34.750510   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:34.769075   17609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 20:48:34.769181   17609 ssh_runner.go:195] Run: sudo systemctl start kubelet
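
	This explicit start pairs with the earlier [WARNING Service-Kubelet]: the unit is started but never enabled; enabling it would make the start persist across reboots:

	    sudo systemctl enable --now kubelet.service
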
	I0920 20:48:34.958394   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:48:35.059448   17609 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:48:35.059519   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:48:35.066023   17609 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:48:35.066102   17609 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:48:35.069929   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:35.072819   17609 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:48:35.072878   17609 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:48:35.178052   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:35.260057   17609 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:35.260140   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:48:35.263246   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:35.263558   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:35.268567   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:35.273752   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:48:35.273806   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:48:35.275175   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:35.278733   17609 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:48:35.278782   17609 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:48:35.460109   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:35.560825   17609 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:35.561072   17609 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:48:35.561042   17609 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:48:35.561213   17609 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:48:35.563768   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:35.566244   17609 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:48:35.566263   17609 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:48:35.759254   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:48:35.759349   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:48:35.878074   17609 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:48:35.878152   17609 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:48:35.963522   17609 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:48:35.963609   17609 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:48:36.160247   17609 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:48:36.160334   17609 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:48:36.179661   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:36.262143   17609 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.492937609s)
	I0920 20:48:36.263284   17609 node_ready.go:35] waiting up to 6m0s for node "addons-135472" to be "Ready" ...
	I0920 20:48:36.263473   17609 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.494369796s)
	I0920 20:48:36.263615   17609 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
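
	The sed pipeline above splices a hosts block (and a log directive) into CoreDNS's Corefile before replacing the ConfigMap; a quick way to confirm the injected record landed (kubectl path and kubeconfig from the log):

	    # The Corefile should now contain, just above its forward stanza:
	    #     hosts {
	    #        192.168.49.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
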
	I0920 20:48:36.269896   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:48:36.269980   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:48:36.271742   17609 node_ready.go:49] node "addons-135472" has status "Ready":"True"
	I0920 20:48:36.271801   17609 node_ready.go:38] duration metric: took 8.43969ms for node "addons-135472" to be "Ready" ...
	I0920 20:48:36.271823   17609 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:48:36.280350   17609 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:36.468027   17609 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:48:36.468108   17609 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:48:36.767906   17609 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-135472" context rescaled to 1 replicas
	I0920 20:48:36.774489   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:48:36.774571   17609 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:48:36.858004   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:48:36.858077   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:48:36.871324   17609 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:48:36.871353   17609 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:48:36.974931   17609 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:48:36.974962   17609 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:48:37.167483   17609 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:48:37.167563   17609 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:48:37.278092   17609 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:48:37.278166   17609 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:48:37.363983   17609 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:37.364061   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:48:37.475827   17609 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:37.475926   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:48:37.659209   17609 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:48:37.659294   17609 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:48:37.766169   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:37.770831   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:48:37.770858   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:48:37.874644   17609 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:48:37.874671   17609 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:48:38.073388   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:48:38.073418   17609 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:48:38.259194   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:38.368016   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:38.460475   17609 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:38.460506   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:48:38.762492   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:48:38.762524   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:48:38.881924   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:38.969423   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:48:38.969454   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:48:39.759878   17609 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:39.759935   17609 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:48:40.369315   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:40.864134   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:41.672067   17609 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:48:41.672135   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:41.692146   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:42.371297   17609 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:48:42.661493   17609 addons.go:234] Setting addon gcp-auth=true in "addons-135472"
	I0920 20:48:42.661590   17609 host.go:66] Checking if "addons-135472" exists ...
	I0920 20:48:42.662157   17609 cli_runner.go:164] Run: docker container inspect addons-135472 --format={{.State.Status}}
	I0920 20:48:42.689456   17609 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:48:42.689506   17609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135472
	I0920 20:48:42.704663   17609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/addons-135472/id_rsa Username:docker}
	I0920 20:48:42.866432   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:44.871064   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:45.680251   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.610283579s)
	I0920 20:48:45.680265   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.72182263s)
	I0920 20:48:45.680327   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.502174549s)
	I0920 20:48:45.680380   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.416765041s)
	I0920 20:48:45.680468   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.417198836s)
	I0920 20:48:45.680620   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.412030752s)
	I0920 20:48:45.680704   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.405480513s)
	I0920 20:48:45.680794   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.220597582s)
	I0920 20:48:45.680966   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.117171814s)
	I0920 20:48:45.681015   17609 addons.go:475] Verifying addon registry=true in "addons-135472"
	I0920 20:48:45.681074   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.501383218s)
	I0920 20:48:45.681098   17609 addons.go:475] Verifying addon metrics-server=true in "addons-135472"
	I0920 20:48:45.681249   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.422019813s)
	W0920 20:48:45.681281   17609 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:45.681303   17609 addons.go:475] Verifying addon ingress=true in "addons-135472"
	I0920 20:48:45.681305   17609 retry.go:31] will retry after 127.870382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:45.681151   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.914895514s)
	I0920 20:48:45.681371   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.799419166s)
	I0920 20:48:45.682626   17609 out.go:177] * Verifying registry addon...
	I0920 20:48:45.758698   17609 out.go:177] * Verifying ingress addon...
	I0920 20:48:45.758749   17609 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-135472 service yakd-dashboard -n yakd-dashboard
	
	I0920 20:48:45.760943   17609 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:48:45.762766   17609 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 20:48:45.765790   17609 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:48:45.766393   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0920 20:48:45.766154   17609 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
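
	That failure is the API server's optimistic-concurrency check: another writer bumped the StorageClass's resourceVersion between minikube's read and its write. Re-issuing the default-class annotation as a patch (which fetches the latest version on each attempt) is the usual remedy; a sketch using the standard annotation key:

	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
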
	I0920 20:48:45.809996   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:45.875963   17609 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 20:48:45.876040   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:46.265587   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:46.266869   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:46.765926   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:46.768108   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:47.264264   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:47.267593   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:47.285707   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:47.567446   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.198029717s)
	I0920 20:48:47.567678   17609 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-135472"
	I0920 20:48:47.567638   17609 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.878156713s)
	I0920 20:48:47.569064   17609 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:47.569185   17609 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:48:47.570317   17609 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:48:47.571328   17609 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:48:47.571345   17609 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:48:47.571368   17609 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:48:47.578199   17609 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:48:47.578231   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:47.593800   17609 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:48:47.593819   17609 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:48:47.669255   17609 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:47.669280   17609 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:48:47.689314   17609 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:47.764469   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:47.766871   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:48.075579   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:48.264637   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:48.266720   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:48.286936   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.476896466s)
	I0920 20:48:48.576270   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:48.764590   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:48.766773   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:48.896015   17609 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.206658245s)
	I0920 20:48:48.898081   17609 addons.go:475] Verifying addon gcp-auth=true in "addons-135472"
	I0920 20:48:48.899472   17609 out.go:177] * Verifying gcp-auth addon...
	I0920 20:48:48.901430   17609 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:48:48.958594   17609 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:48:49.076069   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:49.264215   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:49.266158   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:49.287629   17609 pod_ready.go:98] pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 20:48:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:38 +0000 UTC,FinishedAt:2024-09-20 20:48:48 +0000 UTC,ContainerID:docker://c27c477250702f87f29cc8f546ca3d4525085e182a8ae81d81ac9e05d6fcec4b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c27c477250702f87f29cc8f546ca3d4525085e182a8ae81d81ac9e05d6fcec4b Started:0xc0014161d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000883430} {Name:kube-api-access-h9w89 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000883440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:49.287656   17609 pod_ready.go:82] duration metric: took 13.007248912s for pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace to be "Ready" ...
	E0920 20:48:49.287670   17609 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-56tpz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 20:48:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:38 +0000 UTC,FinishedAt:2024-09-20 20:48:48 +0000 UTC,ContainerID:docker://c27c477250702f87f29cc8f546ca3d4525085e182a8ae81d81ac9e05d6fcec4b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c27c477250702f87f29cc8f546ca3d4525085e182a8ae81d81ac9e05d6fcec4b Started:0xc0014161d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000883430} {Name:kube-api-access-h9w89 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000883440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:49.287682   17609 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace to be "Ready" ...
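
The Succeeded coredns pod is skipped rather than waited on because Succeeded is a terminal phase: a pod whose containers have exited can never report the Ready condition again, so the wait would otherwise hang for the full 6m0s. The loop instead moves on to the replacement pod. A sketch of the underlying readiness test against client-go's core/v1 types (not minikube's actual pod_ready.go):

	package podready

	import corev1 "k8s.io/api/core/v1"

	// isReady reports whether a Pod counts as Ready. Terminal phases
	// (Succeeded/Failed) are rejected outright, matching the "skipping!"
	// branch in the log above; otherwise the PodReady condition decides.
	func isReady(pod *corev1.Pod) bool {
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return false
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
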
	I0920 20:48:49.575593   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:49.765016   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:49.766033   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:50.075905   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:50.264275   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:50.266361   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:50.575600   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:50.765637   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:50.767017   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.075885   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:51.266629   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.267542   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.291812   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:51.576422   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:51.764306   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.766564   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.075315   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:52.264300   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.266178   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.575722   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:52.789716   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.790137   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.075757   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.264165   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.266347   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.293119   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.575763   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.763937   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.765813   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.075671   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.264842   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.266617   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.575377   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.765041   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.766180   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.075908   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.264729   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:55.266441   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.574807   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.764704   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:55.765976   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.792794   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:56.076201   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.264488   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.266758   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:56.575131   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.764414   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.766325   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.076114   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.264223   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.266466   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.575517   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.764381   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.766174   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.074678   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.265308   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.265841   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.291913   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:58.575385   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.765256   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.770408   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.076416   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.265105   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.267139   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.576029   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.764372   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.766664   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.075474   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.264648   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.266437   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.576024   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.765038   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.766211   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.793326   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:01.075890   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.263923   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.266054   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.575303   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.764064   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.765914   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.075457   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.265110   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.265685   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.575456   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.765031   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.766111   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.075880   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.263781   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.266275   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.292949   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:03.577751   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.764827   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.765961   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.076632   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.264729   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.266831   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.585079   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.764489   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.766157   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.075360   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.264668   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.265662   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.575359   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.764151   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.766409   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.792805   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:06.075734   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.264466   17609 kapi.go:107] duration metric: took 20.503519899s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:06.267308   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.574532   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.766856   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.076163   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.266621   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.575073   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.766823   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.075658   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.266666   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.292260   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:08.575663   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.767165   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.076537   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.266786   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.575169   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.767122   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.076156   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.266449   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.313307   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:10.633903   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.766440   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.075298   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.270862   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.575651   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.767047   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.076031   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.266622   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.681313   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.766335   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.792527   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:13.076521   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.267001   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.575241   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.766752   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.075409   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.267078   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.575780   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.766105   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.076026   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.301901   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.364418   17609 pod_ready.go:103] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:15.575817   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.767762   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.075960   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.267233   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.577969   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.767221   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.792767   17609 pod_ready.go:93] pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:16.792793   17609 pod_ready.go:82] duration metric: took 27.505098882s for pod "coredns-7c65d6cfc9-7m4lj" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.792807   17609 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.796962   17609 pod_ready.go:93] pod "etcd-addons-135472" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:16.796979   17609 pod_ready.go:82] duration metric: took 4.16436ms for pod "etcd-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.796988   17609 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.800356   17609 pod_ready.go:93] pod "kube-apiserver-addons-135472" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:16.800372   17609 pod_ready.go:82] duration metric: took 3.378557ms for pod "kube-apiserver-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.800379   17609 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.804029   17609 pod_ready.go:93] pod "kube-controller-manager-addons-135472" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:16.804045   17609 pod_ready.go:82] duration metric: took 3.65981ms for pod "kube-controller-manager-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.804052   17609 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dldq9" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.807463   17609 pod_ready.go:93] pod "kube-proxy-dldq9" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:16.807486   17609 pod_ready.go:82] duration metric: took 3.426976ms for pod "kube-proxy-dldq9" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:16.807498   17609 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:17.074606   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.191157   17609 pod_ready.go:93] pod "kube-scheduler-addons-135472" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:17.191179   17609 pod_ready.go:82] duration metric: took 383.668405ms for pod "kube-scheduler-addons-135472" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:17.191193   17609 pod_ready.go:39] duration metric: took 40.919342338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:17.191211   17609 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:17.191252   17609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:17.204500   17609 api_server.go:72] duration metric: took 42.639064821s to wait for apiserver process to appear ...
	I0920 20:49:17.204520   17609 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:17.204536   17609 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 20:49:17.208581   17609 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 20:49:17.209260   17609 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:17.209279   17609 api_server.go:131] duration metric: took 4.753775ms to wait for apiserver health ...
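
The healthz probe shown above is nothing more than an HTTPS GET against the apiserver that expects status 200 with body "ok". A self-contained sketch; skipping TLS verification here is an assumption standing in for the cluster-CA handling a real client performs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz mirrors the api_server.go probe: GET /healthz, expect 200 "ok".
	func checkHealthz(url string) error {
		client := &http.Client{Transport: &http.Transport{
			// Assumption for brevity; a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.49.2:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
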
	I0920 20:49:17.209286   17609 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:17.267206   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.397686   17609 system_pods.go:59] 17 kube-system pods found
	I0920 20:49:17.397718   17609 system_pods.go:61] "coredns-7c65d6cfc9-7m4lj" [6863697a-ab69-4209-927b-5e01cf8662c7] Running
	I0920 20:49:17.397729   17609 system_pods.go:61] "csi-hostpath-attacher-0" [c1e886ed-1aed-4f03-9e05-9e66b3718c86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:17.397738   17609 system_pods.go:61] "csi-hostpath-resizer-0" [2471632c-09f9-46f2-aaef-f5cf1e545204] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:17.397749   17609 system_pods.go:61] "csi-hostpathplugin-g6nz5" [ecd4b7e0-fa89-4066-a44c-68c63ed08848] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:17.397756   17609 system_pods.go:61] "etcd-addons-135472" [68f0a726-c9cf-44a9-bf2d-7b7f27e16a35] Running
	I0920 20:49:17.397762   17609 system_pods.go:61] "kube-apiserver-addons-135472" [94bb49d1-b91f-46b7-a3aa-4feb05bc246c] Running
	I0920 20:49:17.397767   17609 system_pods.go:61] "kube-controller-manager-addons-135472" [d96a7e2e-0108-4c99-a8c1-5778b9337615] Running
	I0920 20:49:17.397776   17609 system_pods.go:61] "kube-ingress-dns-minikube" [e6ce2cf8-833a-4dd3-8adb-39ec6ff7d3a8] Running
	I0920 20:49:17.397783   17609 system_pods.go:61] "kube-proxy-dldq9" [04a9e54c-a3de-4a38-9a6d-3fe04c7a3b0a] Running
	I0920 20:49:17.397792   17609 system_pods.go:61] "kube-scheduler-addons-135472" [e0299e25-06ee-4577-8908-e7a8b13114c7] Running
	I0920 20:49:17.397800   17609 system_pods.go:61] "metrics-server-84c5f94fbc-6j7n7" [0f128981-52f4-40ca-a230-ae60a04056dd] Running
	I0920 20:49:17.397808   17609 system_pods.go:61] "nvidia-device-plugin-daemonset-nc7bn" [1891b171-e5c4-4a39-bf97-52c73162793d] Running
	I0920 20:49:17.397813   17609 system_pods.go:61] "registry-66c9cd494c-n8x7q" [ab1423e5-b667-4a7f-96f5-061bb4596eeb] Running
	I0920 20:49:17.397830   17609 system_pods.go:61] "registry-proxy-8z8jc" [db35f6da-74a1-46c1-8ca2-4c7e51bf1986] Running
	I0920 20:49:17.397841   17609 system_pods.go:61] "snapshot-controller-56fcc65765-fn8md" [edd76721-3199-440d-bd9a-5b1ab6bd27e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:17.397849   17609 system_pods.go:61] "snapshot-controller-56fcc65765-zrr94" [ec3d66ea-b501-4ed0-8a23-94005c56a42e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:17.397859   17609 system_pods.go:61] "storage-provisioner" [610990bc-6df6-45dc-99ee-7471b9ecd2dc] Running
	I0920 20:49:17.397866   17609 system_pods.go:74] duration metric: took 188.574224ms to wait for pod list to return data ...
	I0920 20:49:17.397875   17609 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:17.576129   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.591064   17609 default_sa.go:45] found service account: "default"
	I0920 20:49:17.591085   17609 default_sa.go:55] duration metric: took 193.203256ms for default service account to be created ...
	I0920 20:49:17.591096   17609 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:17.766801   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.820039   17609 system_pods.go:86] 17 kube-system pods found
	I0920 20:49:17.820074   17609 system_pods.go:89] "coredns-7c65d6cfc9-7m4lj" [6863697a-ab69-4209-927b-5e01cf8662c7] Running
	I0920 20:49:17.820085   17609 system_pods.go:89] "csi-hostpath-attacher-0" [c1e886ed-1aed-4f03-9e05-9e66b3718c86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:17.820091   17609 system_pods.go:89] "csi-hostpath-resizer-0" [2471632c-09f9-46f2-aaef-f5cf1e545204] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:17.820099   17609 system_pods.go:89] "csi-hostpathplugin-g6nz5" [ecd4b7e0-fa89-4066-a44c-68c63ed08848] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:17.820104   17609 system_pods.go:89] "etcd-addons-135472" [68f0a726-c9cf-44a9-bf2d-7b7f27e16a35] Running
	I0920 20:49:17.820109   17609 system_pods.go:89] "kube-apiserver-addons-135472" [94bb49d1-b91f-46b7-a3aa-4feb05bc246c] Running
	I0920 20:49:17.820113   17609 system_pods.go:89] "kube-controller-manager-addons-135472" [d96a7e2e-0108-4c99-a8c1-5778b9337615] Running
	I0920 20:49:17.820119   17609 system_pods.go:89] "kube-ingress-dns-minikube" [e6ce2cf8-833a-4dd3-8adb-39ec6ff7d3a8] Running
	I0920 20:49:17.820123   17609 system_pods.go:89] "kube-proxy-dldq9" [04a9e54c-a3de-4a38-9a6d-3fe04c7a3b0a] Running
	I0920 20:49:17.820129   17609 system_pods.go:89] "kube-scheduler-addons-135472" [e0299e25-06ee-4577-8908-e7a8b13114c7] Running
	I0920 20:49:17.820135   17609 system_pods.go:89] "metrics-server-84c5f94fbc-6j7n7" [0f128981-52f4-40ca-a230-ae60a04056dd] Running
	I0920 20:49:17.820140   17609 system_pods.go:89] "nvidia-device-plugin-daemonset-nc7bn" [1891b171-e5c4-4a39-bf97-52c73162793d] Running
	I0920 20:49:17.820146   17609 system_pods.go:89] "registry-66c9cd494c-n8x7q" [ab1423e5-b667-4a7f-96f5-061bb4596eeb] Running
	I0920 20:49:17.820150   17609 system_pods.go:89] "registry-proxy-8z8jc" [db35f6da-74a1-46c1-8ca2-4c7e51bf1986] Running
	I0920 20:49:17.820159   17609 system_pods.go:89] "snapshot-controller-56fcc65765-fn8md" [edd76721-3199-440d-bd9a-5b1ab6bd27e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:17.820171   17609 system_pods.go:89] "snapshot-controller-56fcc65765-zrr94" [ec3d66ea-b501-4ed0-8a23-94005c56a42e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:17.820177   17609 system_pods.go:89] "storage-provisioner" [610990bc-6df6-45dc-99ee-7471b9ecd2dc] Running
	I0920 20:49:17.820187   17609 system_pods.go:126] duration metric: took 229.085318ms to wait for k8s-apps to be running ...
	I0920 20:49:17.820196   17609 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:17.820239   17609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:17.830815   17609 system_svc.go:56] duration metric: took 10.611067ms WaitForService to wait for kubelet
	I0920 20:49:17.830840   17609 kubeadm.go:582] duration metric: took 43.265404167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:49:17.830861   17609 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:17.992021   17609 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 20:49:17.992050   17609 node_conditions.go:123] node cpu capacity is 8
	I0920 20:49:17.992063   17609 node_conditions.go:105] duration metric: took 161.197221ms to run NodePressure ...
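
The NodePressure check reads the node's capacity fields, which the Kubernetes API expresses as resource.Quantity strings such as "304681132Ki". A short sketch of decoding the two values logged above with the apimachinery resource package:

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// The capacities from the log, parsed the way client code reads them.
		storage := resource.MustParse("304681132Ki")
		cpu := resource.MustParse("8")
		fmt.Printf("ephemeral storage: %d bytes (~%d GiB)\n", storage.Value(), storage.Value()>>30)
		fmt.Printf("cpu: %d cores\n", cpu.Value())
	}
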
	I0920 20:49:17.992078   17609 start.go:241] waiting for startup goroutines ...
	I0920 20:49:18.074955   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.267568   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.575707   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.766561   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.123896   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.267155   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.575562   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.766143   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.076020   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.266570   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.574969   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.766751   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.077222   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.267356   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.576063   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.766903   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.074573   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.266551   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.575986   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.766600   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.075162   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.267023   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.575784   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.766914   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.077671   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.266590   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.575860   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.766881   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.075931   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.267428   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.575596   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.767846   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.078315   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.266370   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.574730   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.765974   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.075562   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.266853   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.575449   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.766570   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.075540   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.267032   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.576040   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.766136   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.075375   17609 kapi.go:107] duration metric: took 41.504002981s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:49:29.266299   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.765934   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.265813   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.766527   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.266311   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.766746   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.266507   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.766153   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.266211   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.766498   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.266219   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.766532   17609 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 36 more identical "waiting for pod" entries at ~500ms intervals, 20:49:35.265827 through 20:49:52.766497 ...]
	I0920 20:49:53.266510   17609 kapi.go:107] duration metric: took 1m7.503740199s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 20:50:12.404405   17609 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:50:12.404429   17609 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 131 more identical "waiting for pod" entries at ~500ms intervals, 20:50:12.905384 through 20:51:17.905318 ...]
	I0920 20:51:18.403845   17609 kapi.go:107] duration metric: took 2m29.502411524s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:51:18.405198   17609 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-135472 cluster.
	I0920 20:51:18.406500   17609 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:51:18.407689   17609 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:51:18.409009   17609 out.go:177] * Enabled addons: nvidia-device-plugin, volcano, ingress-dns, cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 20:51:18.410236   17609 addons.go:510] duration metric: took 2m43.844753984s for enable addons: enabled=[nvidia-device-plugin volcano ingress-dns cloud-spanner storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 20:51:18.410273   17609 start.go:246] waiting for cluster config update ...
	I0920 20:51:18.410297   17609 start.go:255] writing updated cluster config ...
	I0920 20:51:18.410542   17609 ssh_runner.go:195] Run: rm -f paused
	I0920 20:51:18.457244   17609 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:51:18.458650   17609 out.go:177] * Done! kubectl is now configured to use "addons-135472" cluster and "default" namespace by default
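
The kapi.go:96 / kapi.go:107 pairs above are minikube's label-selector wait loop: list the pods matching a selector roughly every 500ms until none is Pending, then report the elapsed time as a duration metric. Below is a minimal sketch of that shape with client-go; the function name waitForPods, the kubeconfig location, and the hard-coded selector are illustrative, not minikube's actual kapi API.

    // poll_wait.go -- a minimal sketch of a label-selector pod wait loop,
    // in the spirit of the kapi.go entries above (names are illustrative).
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls every interval until all pods matching selector in ns
    // are Running, or the timeout elapses.
    func waitForPods(cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			running := 0
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					running++
    				} else {
    					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
    				}
    			}
    			if running == len(pods.Items) {
    				return nil
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", selector)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	start := time.Now()
    	if err := waitForPods(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth",
    		500*time.Millisecond, 6*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("duration metric: took %s\n", time.Since(start))
    }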
	
	
	==> Docker <==
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.082316704Z" level=info msg="ignoring event" container=3416c221be5ffe8e72a88e8b9e0650f5ca11fc02e4fa48190fba3e0bc9c874fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.082370825Z" level=info msg="ignoring event" container=3dd570a20fa88d6b7085cad1e4bdd10fc57d98cdf3d23929a8a61ff249d7e70e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.160707821Z" level=info msg="ignoring event" container=d3b6af70bf62eb597596b0b8ca0f4a71cfd8f28edf33e8c124975419c2f25398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.169738192Z" level=info msg="ignoring event" container=44a8cbf55922ff942882512e51f4c7b452e8bac6f37678ee5d93cf570b5e1523 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.170741349Z" level=info msg="ignoring event" container=82a8eb43f0a917f38f7cae504a87c4bb24bf8666ad72291f25f02afe2c47a798 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.173061067Z" level=info msg="ignoring event" container=05a3b1bf7827e9832f874b7165cd7546fc4fca2b3e5f64b7a1a26d78011a59b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.184077267Z" level=info msg="ignoring event" container=9ef7aa4bdb1d976420a088e62ed425779e6b7b97fb8fd7d4951a764048643c4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.791492498Z" level=info msg="ignoring event" container=7f00d1eef828c05468b79ce16832617bdf71250d445733b2857790a0d37e3e70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.805745811Z" level=info msg="ignoring event" container=091c613362bf17cd23ff605967987c32c3148fdcfd3d214f085396f97bcc6a57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:40 addons-135472 dockerd[1342]: time="2024-09-20T21:00:40.875650571Z" level=info msg="ignoring event" container=6709086385f0eff837a09e42c4738f5b20b434a61ddb44faf997107366d046c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:44 addons-135472 dockerd[1342]: time="2024-09-20T21:00:44.094842200Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=d5aa45762762da292a19bb27e5536921054810a397ad6cec4fa0e74b92c8b2d9 spanID=42c0330ef9ba6afa traceID=40bf797b57d65f34a10dee9ca7d9c8fa
	Sep 20 21:00:44 addons-135472 dockerd[1342]: time="2024-09-20T21:00:44.155342794Z" level=info msg="ignoring event" container=d5aa45762762da292a19bb27e5536921054810a397ad6cec4fa0e74b92c8b2d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:44 addons-135472 cri-dockerd[1608]: time="2024-09-20T21:00:44Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-bc57996ff-s8phk_ingress-nginx\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 20 21:00:44 addons-135472 dockerd[1342]: time="2024-09-20T21:00:44.299536103Z" level=info msg="ignoring event" container=748b52dd51b500d4f7c30d6d13da3b101f2adaed5cca0c9dee9a6dd57dc7bf85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:46 addons-135472 dockerd[1342]: time="2024-09-20T21:00:46.371481499Z" level=info msg="ignoring event" container=a66b2384155c92184f0a4508370d03e8fd8a539515d0c688c129a0c9cb52144a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:46 addons-135472 dockerd[1342]: time="2024-09-20T21:00:46.372821446Z" level=info msg="ignoring event" container=07c696666a296d6c40c9b8e1991e653d6bad920ad15d2e09cfdfed969cfe9560 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:46 addons-135472 dockerd[1342]: time="2024-09-20T21:00:46.547456284Z" level=info msg="ignoring event" container=ffe65bfbcb0ef9f723a71a57dbec76888af6efdd81dbf4b65a5480fbbbccc2db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:46 addons-135472 dockerd[1342]: time="2024-09-20T21:00:46.569313545Z" level=info msg="ignoring event" container=234433d87febf06e7d2df942b3c647f500b842f1f86e6b03d913cb50c642aac0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:49 addons-135472 dockerd[1342]: time="2024-09-20T21:00:49.846450557Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cc40d4310e127e80 traceID=32183bc587e51af6903141c7808b55e4
	Sep 20 21:00:49 addons-135472 dockerd[1342]: time="2024-09-20T21:00:49.848068722Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cc40d4310e127e80 traceID=32183bc587e51af6903141c7808b55e4
	Sep 20 21:01:09 addons-135472 dockerd[1342]: time="2024-09-20T21:01:09.643581076Z" level=info msg="ignoring event" container=6641c319a32596647247533456772d6a0d4089f23b7aab8dbde2342175a4a136 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:10 addons-135472 dockerd[1342]: time="2024-09-20T21:01:10.086896514Z" level=info msg="ignoring event" container=24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:10 addons-135472 dockerd[1342]: time="2024-09-20T21:01:10.171136172Z" level=info msg="ignoring event" container=0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:10 addons-135472 dockerd[1342]: time="2024-09-20T21:01:10.224939270Z" level=info msg="ignoring event" container=ab9e92e6feaa1cf4fec9ce691b379d0b46c7258d4210549832c58437f61b19e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:01:10 addons-135472 dockerd[1342]: time="2024-09-20T21:01:10.313408675Z" level=info msg="ignoring event" container=59a42f110ab6128e8493a4ed4537ceb939f6fcda3ee046cfdec51700e21f9685 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
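
One Docker entry above bears directly on the failing registry check: at 21:00:49 the daemon's POST /v1.43/images/create (the image-pull endpoint) fails for gcr.io/k8s-minikube/busybox with "unauthorized: authentication failed". The reference carries no tag, so it resolves to :latest, and gcr.io commonly answers 401 rather than 404 when an unauthenticated client asks for a manifest that does not exist -- so this reads more like a missing "latest" tag than a real credential problem, though that is an inference, not something this log proves. A sketch of the same pull through the Docker Go SDK:

    // pull_check.go -- a minimal sketch reproducing the failing
    // POST /v1.43/images/create call via the Docker SDK.
    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"os"

    	"github.com/docker/docker/api/types"
    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()
    	// A tag-less reference is implicitly ":latest" -- the pull the
    	// daemon log above rejects with "unauthorized: authentication failed".
    	rc, err := cli.ImagePull(context.Background(),
    		"gcr.io/k8s-minikube/busybox:latest", types.ImagePullOptions{})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "pull failed:", err)
    		return
    	}
    	defer rc.Close()
    	io.Copy(os.Stdout, rc) // stream the pull-progress JSON
    }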
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2da9301ea32c7       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  31 seconds ago      Running             hello-world-app           0                   6e2065431b2ab       hello-world-app-55bf9c44b4-6gd9l
	17a7916bd4793       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                38 seconds ago      Running             nginx                     0                   b5b4651ace8eb       nginx
	ef3621a95b76a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   3b06f3f4cf97f       gcp-auth-89d5ffd79-89jmh
	2614224e15df0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   a00185430995f       ingress-nginx-admission-patch-bdkdp
	b4dab13c6b5be       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   4f9dc6e057bef       ingress-nginx-admission-create-72gkj
	0f82d95cccb42       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   a6c0e047973fe       local-path-provisioner-86d989889c-mj2z7
	1a277a4906625       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   bd433ed5b94b2       storage-provisioner
	d4549c22b98ac       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   f906d9e567ff7       coredns-7c65d6cfc9-7m4lj
	0d0804c9fcbff       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   3c52512d3c308       kube-proxy-dldq9
	f85fc9ae7c205       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   c1d70a89071f9       kube-apiserver-addons-135472
	0817a42f2a24e       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   dd09603121160       kube-scheduler-addons-135472
	d228da92f9f07       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   102b3750a64c9       etcd-addons-135472
	cdff8f7c2b923       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   9e61c84a88451       kube-controller-manager-addons-135472
	
	
	==> coredns [d4549c22b98a] <==
	[INFO] 10.244.0.21:32891 - 51378 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00594696s
	[INFO] 10.244.0.21:44163 - 30159 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069706s
	[INFO] 10.244.0.21:34449 - 26402 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.010218436s
	[INFO] 10.244.0.21:56418 - 50882 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001568911s
	[INFO] 10.244.0.21:38275 - 2755 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002950694s
	[INFO] 10.244.0.21:41210 - 16321 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.010397775s
	[INFO] 10.244.0.21:32891 - 48383 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002718676s
	[INFO] 10.244.0.21:36523 - 42172 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.010771909s
	[INFO] 10.244.0.21:41781 - 53445 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00299836s
	[INFO] 10.244.0.21:38865 - 9563 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003069156s
	[INFO] 10.244.0.21:56446 - 966 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.010993413s
	[INFO] 10.244.0.21:34449 - 7986 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000243639s
	[INFO] 10.244.0.21:36523 - 60842 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078574s
	[INFO] 10.244.0.21:32891 - 3257 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050826s
	[INFO] 10.244.0.21:38057 - 19560 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002638223s
	[INFO] 10.244.0.21:56446 - 63068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064117s
	[INFO] 10.244.0.21:39362 - 17236 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004764669s
	[INFO] 10.244.0.21:41781 - 1814 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082504s
	[INFO] 10.244.0.21:38275 - 62761 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004945s
	[INFO] 10.244.0.21:38865 - 64813 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059476s
	[INFO] 10.244.0.21:56418 - 29113 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044093s
	[INFO] 10.244.0.21:41210 - 19662 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000195825s
	[INFO] 10.244.0.21:38057 - 58879 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053766s
	[INFO] 10.244.0.21:39362 - 23095 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00475268s
	[INFO] 10.244.0.21:39362 - 25605 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008046s
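
The paired NXDOMAIN/NOERROR entries above are the pod resolver walking its search path: with the default ndots:5, the four-dot name hello-world-app.default.svc.cluster.local is first tried with each search suffix (here the GCE host's google.internal, hence the NXDOMAINs) before the absolute name answers NOERROR. A trailing dot marks the name fully qualified and skips that walk. A tiny sketch, assuming it runs inside a pod on this cluster:

    // dns_fqdn.go -- with ndots:5, the 4-dot service name is tried against
    // each search domain first (the NXDOMAIN lines above); a trailing dot
    // makes the name absolute, so only one query is issued.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	for _, name := range []string{
    		"hello-world-app.default.svc.cluster.local",  // walks the search domains first
    		"hello-world-app.default.svc.cluster.local.", // absolute: one query
    	} {
    		addrs, err := net.LookupHost(name)
    		fmt.Println(name, addrs, err)
    	}
    }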
	
	
	==> describe nodes <==
	Name:               addons-135472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-135472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-135472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_48_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-135472
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:48:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-135472
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:01:06 +0000   Fri, 20 Sep 2024 20:48:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:01:06 +0000   Fri, 20 Sep 2024 20:48:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:01:06 +0000   Fri, 20 Sep 2024 20:48:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:01:06 +0000   Fri, 20 Sep 2024 20:48:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-135472
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 94a5a244c6f24084b87142a7c0652553
	  System UUID:                63f110a0-0a70-4a3b-8138-a16e3d9aa477
	  Boot ID:                    f541ecf7-517e-485c-8b68-8f94d94b6d3f
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-6gd9l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  gcp-auth                    gcp-auth-89d5ffd79-89jmh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-7m4lj                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-135472                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-135472               250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-135472      200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dldq9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-135472               100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-mj2z7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-135472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-135472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-135472 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-135472 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-135472 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-135472 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-135472 event: Registered Node addons-135472 in Controller
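
The block above is what `kubectl describe node addons-135472` renders from the Node object; note the Allocated-resources arithmetic: 750m of requested CPU against 8 allocatable cores is 750m/8000m ≈ 9%, as shown. The same status can be read directly with client-go; a minimal sketch (node name taken from the log, kubeconfig location assumed):

    // node_conditions.go -- reads the same Node status that the
    // "describe nodes" block above renders.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-135472", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Message)
    	}
    }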
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 70 77 6b 65 91 08 06
	[  +1.633603] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 b6 b7 08 60 16 08 06
	[  +5.135270] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 4a dd 84 01 4d 08 06
	[  +0.590819] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 49 1b bc 81 d2 08 06
	[  +0.088986] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 4a 9f fa 69 cd 08 06
	[ +25.450389] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 cf 16 3a 98 6a 08 06
	[  +1.010954] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 f0 b4 01 c9 ac 08 06
	[Sep20 20:50] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 74 43 84 da ff 08 06
	[  +0.009410] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe e5 20 aa ce 99 08 06
	[Sep20 20:51] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 72 f2 79 4b 13 08 06
	[  +0.000491] IPv4: martian source 10.244.0.25 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 31 fd 29 84 28 08 06
	[Sep20 21:00] IPv4: martian source 10.244.0.35 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 cf 16 3a 98 6a 08 06
	[  +1.858654] IPv4: martian source 10.244.0.21 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 31 fd 29 84 28 08 06
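
The "martian source" entries are the kernel's reverse-path logging: packets claiming a 10.244.0.0/24 pod source arrive on eth0, which rp_filter considers implausible for that interface, and net.ipv4.conf.*.log_martians makes each one visible in dmesg. In this docker-driver setup they are noise rather than a failure signal. A small sketch that just reads the relevant knobs (interface names assumed):

    // log_martians.go -- the "martian source" lines above are emitted when
    // net.ipv4.conf.*.log_martians is enabled; this only reads the knob.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	for _, iface := range []string{"all", "eth0"} {
    		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/" + iface + "/log_martians")
    		if err != nil {
    			fmt.Println(iface, err)
    			continue
    		}
    		fmt.Printf("net.ipv4.conf.%s.log_martians = %s\n", iface, strings.TrimSpace(string(b)))
    	}
    }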
	
	
	==> etcd [d228da92f9f0] <==
	{"level":"info","ts":"2024-09-20T20:48:25.584117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T20:48:25.584131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T20:48:25.584956Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:48:25.585602Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-135472 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T20:48:25.585607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T20:48:25.585634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T20:48:25.585840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T20:48:25.585869Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T20:48:25.586319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:48:25.586385Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:48:25.586405Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:48:25.586767Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T20:48:25.587035Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T20:48:25.587925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T20:48:25.588092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T20:49:10.631392Z","caller":"traceutil/trace.go:171","msg":"trace[1058944811] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"123.049172ms","start":"2024-09-20T20:49:10.508323Z","end":"2024-09-20T20:49:10.631372Z","steps":["trace[1058944811] 'process raft request'  (duration: 56.658154ms)","trace[1058944811] 'compare'  (duration: 66.318188ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T20:49:12.678779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.695085ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:49:12.678873Z","caller":"traceutil/trace.go:171","msg":"trace[1110733045] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1075; }","duration":"102.806898ms","start":"2024-09-20T20:49:12.576053Z","end":"2024-09-20T20:49:12.678860Z","steps":["trace[1110733045] 'range keys from in-memory index tree'  (duration: 102.655162ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:12.678758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.921167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:49:12.678965Z","caller":"traceutil/trace.go:171","msg":"trace[1118587257] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1075; }","duration":"106.164017ms","start":"2024-09-20T20:49:12.572787Z","end":"2024-09-20T20:49:12.678951Z","steps":["trace[1118587257] 'range keys from in-memory index tree'  (duration: 105.885963ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:52.088930Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.121626ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032030038373407 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70ede062708ce\" mod_revision:1141 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70ede062708ce\" value_size:819 lease:8128032030038373060 >> failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70ede062708ce\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-20T20:49:52.089020Z","caller":"traceutil/trace.go:171","msg":"trace[33980969] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"168.922468ms","start":"2024-09-20T20:49:51.920083Z","end":"2024-09-20T20:49:52.089005Z","steps":["trace[33980969] 'process raft request'  (duration: 53.276544ms)","trace[33980969] 'compare'  (duration: 115.042022ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:58:26.578734Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1895}
	{"level":"info","ts":"2024-09-20T20:58:26.601591Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1895,"took":"22.37242ms","hash":785413254,"current-db-size-bytes":8921088,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4964352,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-20T20:58:26.601625Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":785413254,"revision":1895,"compact-revision":-1}
	
	
	==> gcp-auth [ef3621a95b76] <==
	2024/09/20 20:51:56 Ready to write response ...
	2024/09/20 20:51:56 Ready to marshal response ...
	2024/09/20 20:51:56 Ready to write response ...
	2024/09/20 21:00:09 Ready to marshal response ...
	2024/09/20 21:00:09 Ready to write response ...
	2024/09/20 21:00:09 Ready to marshal response ...
	2024/09/20 21:00:09 Ready to write response ...
	2024/09/20 21:00:09 Ready to marshal response ...
	2024/09/20 21:00:09 Ready to write response ...
	2024/09/20 21:00:11 Ready to marshal response ...
	2024/09/20 21:00:11 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:15 Ready to marshal response ...
	2024/09/20 21:00:15 Ready to write response ...
	2024/09/20 21:00:18 Ready to marshal response ...
	2024/09/20 21:00:18 Ready to write response ...
	2024/09/20 21:00:29 Ready to marshal response ...
	2024/09/20 21:00:29 Ready to write response ...
	2024/09/20 21:00:30 Ready to marshal response ...
	2024/09/20 21:00:30 Ready to write response ...
	2024/09/20 21:00:38 Ready to marshal response ...
	2024/09/20 21:00:38 Ready to write response ...
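
gcp-auth is a mutating admission webhook, so each "Ready to marshal response ... / Ready to write response ..." pair above is one AdmissionReview round-trip for a pod being created (the 21:00:xx bursts line up with the test pods appearing elsewhere in this log). A sketch of that handler shape only -- gcp-auth's actual mutation logic, which patches credential mounts into the pod, is not reproduced, and the cert paths are assumptions:

    // webhook_sketch.go -- the minimal shape of a mutating admission
    // webhook handler; each request/response is one log pair above.
    package main

    import (
    	"encoding/json"
    	"log"
    	"net/http"

    	admissionv1 "k8s.io/api/admission/v1"
    )

    func handle(w http.ResponseWriter, r *http.Request) {
    	var review admissionv1.AdmissionReview
    	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
    		http.Error(w, err.Error(), http.StatusBadRequest)
    		return
    	}
    	if review.Request == nil {
    		http.Error(w, "empty AdmissionReview request", http.StatusBadRequest)
    		return
    	}
    	review.Response = &admissionv1.AdmissionResponse{
    		UID:     review.Request.UID, // must echo the request UID
    		Allowed: true,               // a real webhook would attach a JSON patch here
    	}
    	log.Println("Ready to marshal response ...")
    	out, _ := json.Marshal(review)
    	log.Println("Ready to write response ...")
    	w.Header().Set("Content-Type", "application/json")
    	w.Write(out)
    }

    func main() {
    	http.HandleFunc("/mutate", handle)
    	// TLS material paths are placeholders; admission webhooks must serve HTTPS.
    	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
    }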
	
	
	==> kernel <==
	 21:01:11 up 43 min,  0 users,  load average: 0.54, 0.33, 0.27
	Linux addons-135472 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [f85fc9ae7c20] <==
	W0920 20:51:47.973469       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 20:51:48.076606       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 20:51:48.275507       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 20:51:48.585002       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 21:00:08.990734       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0920 21:00:15.675670       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.43.158"}
	I0920 21:00:18.805839       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 21:00:24.475924       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 21:00:25.490955       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 21:00:29.917946       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 21:00:30.265919       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.155.243"}
	I0920 21:00:38.731790       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.173.68"}
	I0920 21:00:46.228880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 21:00:46.228934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 21:00:46.239965       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 21:00:46.240011       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 21:00:46.242848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 21:00:46.242892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 21:00:46.260323       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 21:00:46.260371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 21:00:46.269172       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 21:00:46.269204       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 21:00:47.243666       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 21:00:47.269717       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 21:00:47.279403       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [cdff8f7c2b92] <==
	E0920 21:00:52.768105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:53.786206       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:53.786240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:57.079939       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:57.079979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:57.322383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:57.322416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:57.533621       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:57.533658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:04.379028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:04.379065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:04.629308       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:04.629346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:04.956401       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 21:01:04.956431       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 21:01:05.263208       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 21:01:05.263250       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 21:01:06.107708       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:06.107745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:06.405383       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-135472"
	W0920 21:01:06.993320       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:06.993359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:08.734381       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:08.734417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:10.054970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.322µs"
	
	
	==> kube-proxy [0d0804c9fcbf] <==
	I0920 20:48:38.575560       1 server_linux.go:66] "Using iptables proxy"
	I0920 20:48:39.172955       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 20:48:39.173022       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:48:39.757807       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 20:48:39.757865       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:48:39.766777       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:48:39.767193       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:48:39.767217       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:48:39.769542       1 config.go:199] "Starting service config controller"
	I0920 20:48:39.769566       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:48:39.769590       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:48:39.769596       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:48:39.770003       1 config.go:328] "Starting node config controller"
	I0920 20:48:39.770009       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:48:39.870467       1 shared_informer.go:320] Caches are synced for node config
	I0920 20:48:39.870504       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:48:39.870541       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0817a42f2a24] <==
	E0920 20:48:27.671246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 20:48:27.671442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 20:48:27.671398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 20:48:27.671449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:27.671253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 20:48:27.671589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:27.671253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:27.671632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:27.671401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:27.671673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.483651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 20:48:28.483682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.495740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:48:28.495764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.567223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:28.567259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.572523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:28.572558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.580719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:48:28.580750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.729016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:48:28.729055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:28.730008       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:48:28.730039       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 20:48:31.468803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 21:00:54 addons-135472 kubelet[2444]: E0920 21:00:54.792438    2444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="837e41f6-532d-44bd-9cbf-429c7ea7bd7d"
	Sep 20 21:01:02 addons-135472 kubelet[2444]: I0920 21:01:02.790936    2444 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8z8jc" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 21:01:02 addons-135472 kubelet[2444]: E0920 21:01:02.792762    2444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c"
	Sep 20 21:01:05 addons-135472 kubelet[2444]: I0920 21:01:05.790663    2444 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-7m4lj" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 21:01:06 addons-135472 kubelet[2444]: E0920 21:01:06.792331    2444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="837e41f6-532d-44bd-9cbf-429c7ea7bd7d"
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.786246    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mnsc\" (UniqueName: \"kubernetes.io/projected/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-kube-api-access-8mnsc\") pod \"60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c\" (UID: \"60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c\") "
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.786280    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-gcp-creds\") pod \"60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c\" (UID: \"60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c\") "
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.786334    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c" (UID: "60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.787811    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-kube-api-access-8mnsc" (OuterVolumeSpecName: "kube-api-access-8mnsc") pod "60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c" (UID: "60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c"). InnerVolumeSpecName "kube-api-access-8mnsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.886862    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8mnsc\" (UniqueName: \"kubernetes.io/projected/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-kube-api-access-8mnsc\") on node \"addons-135472\" DevicePath \"\""
	Sep 20 21:01:09 addons-135472 kubelet[2444]: I0920 21:01:09.886896    2444 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/60dd3ab7-ff5f-4ccc-8fef-a1694f5efc9c-gcp-creds\") on node \"addons-135472\" DevicePath \"\""
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.390575    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7ch\" (UniqueName: \"kubernetes.io/projected/ab1423e5-b667-4a7f-96f5-061bb4596eeb-kube-api-access-wn7ch\") pod \"ab1423e5-b667-4a7f-96f5-061bb4596eeb\" (UID: \"ab1423e5-b667-4a7f-96f5-061bb4596eeb\") "
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.390636    2444 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2r9\" (UniqueName: \"kubernetes.io/projected/db35f6da-74a1-46c1-8ca2-4c7e51bf1986-kube-api-access-vw2r9\") pod \"db35f6da-74a1-46c1-8ca2-4c7e51bf1986\" (UID: \"db35f6da-74a1-46c1-8ca2-4c7e51bf1986\") "
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.392350    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1423e5-b667-4a7f-96f5-061bb4596eeb-kube-api-access-wn7ch" (OuterVolumeSpecName: "kube-api-access-wn7ch") pod "ab1423e5-b667-4a7f-96f5-061bb4596eeb" (UID: "ab1423e5-b667-4a7f-96f5-061bb4596eeb"). InnerVolumeSpecName "kube-api-access-wn7ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.392512    2444 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db35f6da-74a1-46c1-8ca2-4c7e51bf1986-kube-api-access-vw2r9" (OuterVolumeSpecName: "kube-api-access-vw2r9") pod "db35f6da-74a1-46c1-8ca2-4c7e51bf1986" (UID: "db35f6da-74a1-46c1-8ca2-4c7e51bf1986"). InnerVolumeSpecName "kube-api-access-vw2r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.491129    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wn7ch\" (UniqueName: \"kubernetes.io/projected/ab1423e5-b667-4a7f-96f5-061bb4596eeb-kube-api-access-wn7ch\") on node \"addons-135472\" DevicePath \"\""
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.491162    2444 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vw2r9\" (UniqueName: \"kubernetes.io/projected/db35f6da-74a1-46c1-8ca2-4c7e51bf1986-kube-api-access-vw2r9\") on node \"addons-135472\" DevicePath \"\""
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.603714    2444 scope.go:117] "RemoveContainer" containerID="24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.620070    2444 scope.go:117] "RemoveContainer" containerID="24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: E0920 21:01:10.620730    2444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0" containerID="24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.620770    2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0"} err="failed to get container status \"24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 24b3d0c671515d361568eaad383b4db480e7769618191129552d7c3bfad73ed0"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.620796    2444 scope.go:117] "RemoveContainer" containerID="0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.636288    2444 scope.go:117] "RemoveContainer" containerID="0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: E0920 21:01:10.636895    2444 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd" containerID="0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd"
	Sep 20 21:01:10 addons-135472 kubelet[2444]: I0920 21:01:10.637046    2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd"} err="failed to get container status \"0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0041c6fb54cc137712d2058d0fe6c716d26f9ac789549b6f9d7bbc62fc1134fd"
	
	
	==> storage-provisioner [1a277a490662] <==
	I0920 20:48:41.859972       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:48:41.883861       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:48:41.883942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:48:41.966756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:48:41.966964       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-135472_5f138205-f562-4d76-aeec-c671f8d59364!
	I0920 20:48:41.970074       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6370178b-d8be-414f-9c5a-715141d2a863", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-135472_5f138205-f562-4d76-aeec-c671f8d59364 became leader
	I0920 20:48:42.067133       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-135472_5f138205-f562-4d76-aeec-c671f8d59364!
	

-- /stdout --
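
The kube-proxy block in the logs above is worth a note: the startup warning about nodePortAddresses being unset is advisory (NodePort connections are accepted on all local IPs) and is unrelated to the registry failure. For completeness, a minimal sketch of the change that warning suggests, assuming kube-proxy is driven by a KubeProxyConfiguration file as in kubeadm-provisioned clusters (the "primary" value is accepted from Kubernetes v1.29 onward; a list of CIDRs also works):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Listen for NodePort traffic only on the node's primary IP(s)
	# instead of every local address.
	nodePortAddresses:
	  - primary
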
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-135472 -n addons-135472
helpers_test.go:261: (dbg) Run:  kubectl --context addons-135472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-135472 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-135472 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-135472/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 20:51:56 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-npb8r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-npb8r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-135472
	  Normal   Pulling    7m56s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m56s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m56s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.30s)
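
The kubelet events above carry the actual root cause: every pull of gcr.io/k8s-minikube/busybox fails at the manifest HEAD request with "unauthorized: authentication failed", so the registry-test and busybox pods never leave ImagePullBackOff and the wget probe times out. A quick diagnostic sketch for confirming the failure from the host, outside the cluster (docker and curl on the host are assumed; this is not part of the test itself):

	# Retry the exact pull the kubelet attempted. The repository is public,
	# so anonymous pulls normally succeed; a repeated "unauthorized" points
	# at gcr.io or its token service rather than at the cluster under test.
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

	# The raw endpoint from the failure event. An anonymous HEAD normally
	# returns a 401 challenge that clients resolve via a token exchange.
	curl -sI https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc
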


Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 3.95
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.17
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
20 TestDownloadOnlyKic 0.93
21 TestBinaryMirror 0.73
22 TestOffline 90.97
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.91
29 TestAddons/serial/Volcano 37.66
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 18.49
35 TestAddons/parallel/InspektorGadget 10.56
36 TestAddons/parallel/MetricsServer 5.74
38 TestAddons/parallel/CSI 48.08
39 TestAddons/parallel/Headlamp 16.42
40 TestAddons/parallel/CloudSpanner 5.41
41 TestAddons/parallel/LocalPath 10.17
42 TestAddons/parallel/NvidiaDevicePlugin 5.44
43 TestAddons/parallel/Yakd 10.54
44 TestAddons/StoppedEnableDisable 11.02
45 TestCertOptions 23.28
46 TestCertExpiration 227.31
47 TestDockerFlags 29.35
48 TestForceSystemdFlag 26.49
49 TestForceSystemdEnv 25.18
51 TestKVMDriverInstallOrUpdate 1.21
55 TestErrorSpam/setup 19.84
56 TestErrorSpam/start 0.51
57 TestErrorSpam/status 0.81
58 TestErrorSpam/pause 1.07
59 TestErrorSpam/unpause 1.16
60 TestErrorSpam/stop 10.78
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 70.14
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.45
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.34
72 TestFunctional/serial/CacheCmd/cache/add_local 0.63
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 37.49
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.87
83 TestFunctional/serial/LogsFileCmd 0.89
84 TestFunctional/serial/InvalidService 3.79
86 TestFunctional/parallel/ConfigCmd 0.31
87 TestFunctional/parallel/DashboardCmd 9.08
88 TestFunctional/parallel/DryRun 0.34
89 TestFunctional/parallel/InternationalLanguage 0.13
90 TestFunctional/parallel/StatusCmd 0.79
94 TestFunctional/parallel/ServiceCmdConnect 10.61
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 25.71
98 TestFunctional/parallel/SSHCmd 0.55
99 TestFunctional/parallel/CpCmd 1.9
100 TestFunctional/parallel/MySQL 23.19
101 TestFunctional/parallel/FileSync 0.28
102 TestFunctional/parallel/CertSync 1.81
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
110 TestFunctional/parallel/License 0.2
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 0.61
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.58
118 TestFunctional/parallel/ImageCommands/Setup 0.43
119 TestFunctional/parallel/DockerEnv/bash 0.99
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
124 TestFunctional/parallel/ProfileCmd/profile_list 0.43
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
127 TestFunctional/parallel/MountCmd/any-port 6.57
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.52
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
132 TestFunctional/parallel/MountCmd/specific-port 1.53
133 TestFunctional/parallel/ServiceCmd/List 0.97
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
135 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
140 TestFunctional/parallel/ServiceCmd/Format 0.41
141 TestFunctional/parallel/ServiceCmd/URL 0.49
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.25
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 96.96
160 TestMultiControlPlane/serial/DeployApp 4.75
161 TestMultiControlPlane/serial/PingHostFromPods 0.98
162 TestMultiControlPlane/serial/AddWorkerNode 19.53
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
165 TestMultiControlPlane/serial/CopyFile 14.65
166 TestMultiControlPlane/serial/StopSecondaryNode 11.32
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
168 TestMultiControlPlane/serial/RestartSecondaryNode 65.28
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 230.09
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.05
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
173 TestMultiControlPlane/serial/StopCluster 22.93
174 TestMultiControlPlane/serial/RestartCluster 88.7
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 37.42
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
180 TestImageBuild/serial/Setup 19.47
181 TestImageBuild/serial/NormalBuild 1.2
182 TestImageBuild/serial/BuildWithBuildArg 0.69
183 TestImageBuild/serial/BuildWithDockerIgnore 0.5
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.52
188 TestJSONOutput/start/Command 69.61
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.46
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.39
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.81
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
213 TestKicCustomNetwork/create_custom_network 22.04
214 TestKicCustomNetwork/use_default_bridge_network 22.83
215 TestKicExistingNetwork 21.77
216 TestKicCustomSubnet 22.48
217 TestKicStaticIP 23.61
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 50.98
222 TestMountStart/serial/StartWithMountFirst 6.23
223 TestMountStart/serial/VerifyMountFirst 0.22
224 TestMountStart/serial/StartWithMountSecond 8.94
225 TestMountStart/serial/VerifyMountSecond 0.22
226 TestMountStart/serial/DeleteFirst 1.4
227 TestMountStart/serial/VerifyMountPostDelete 0.22
228 TestMountStart/serial/Stop 1.16
229 TestMountStart/serial/RestartStopped 7.49
230 TestMountStart/serial/VerifyMountPostStop 0.22
233 TestMultiNode/serial/FreshStart2Nodes 56.62
234 TestMultiNode/serial/DeployApp2Nodes 54.84
235 TestMultiNode/serial/PingHostFrom2Pods 0.65
236 TestMultiNode/serial/AddNode 16.71
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.58
239 TestMultiNode/serial/CopyFile 8.38
240 TestMultiNode/serial/StopNode 2.02
241 TestMultiNode/serial/StartAfterStop 9.5
242 TestMultiNode/serial/RestartKeepsNodes 96.7
243 TestMultiNode/serial/DeleteNode 5.08
244 TestMultiNode/serial/StopMultiNode 21.45
245 TestMultiNode/serial/RestartMultiNode 51.2
246 TestMultiNode/serial/ValidateNameConflict 23
251 TestPreload 94.27
253 TestScheduledStopUnix 93.7
254 TestSkaffold 96.64
256 TestInsufficientStorage 12.56
257 TestRunningBinaryUpgrade 100.25
259 TestKubernetesUpgrade 342.77
260 TestMissingContainerUpgrade 137.27
269 TestPause/serial/Start 65.24
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
272 TestNoKubernetes/serial/StartWithK8s 30.71
273 TestNoKubernetes/serial/StartWithStopK8s 6.74
285 TestNoKubernetes/serial/Start 11.06
286 TestStoppedBinaryUpgrade/Setup 0.55
287 TestStoppedBinaryUpgrade/Upgrade 61.57
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
289 TestNoKubernetes/serial/ProfileList 52.12
290 TestPause/serial/SecondStartNoReconfiguration 30.9
291 TestPause/serial/Pause 0.86
292 TestPause/serial/VerifyStatus 0.3
293 TestPause/serial/Unpause 0.44
294 TestPause/serial/PauseAgain 0.63
295 TestPause/serial/DeletePaused 2.06
296 TestPause/serial/VerifyDeletedResources 13.87
297 TestNoKubernetes/serial/Stop 1.17
298 TestNoKubernetes/serial/StartNoArgs 8.24
299 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
302 TestStartStop/group/old-k8s-version/serial/FirstStart 104.31
304 TestStartStop/group/no-preload/serial/FirstStart 41.97
305 TestStartStop/group/no-preload/serial/DeployApp 9.24
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
307 TestStartStop/group/no-preload/serial/Stop 10.72
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 297.33
311 TestStartStop/group/embed-certs/serial/FirstStart 34.38
312 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
314 TestStartStop/group/old-k8s-version/serial/Stop 10.8
315 TestStartStop/group/embed-certs/serial/DeployApp 8.22
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
318 TestStartStop/group/old-k8s-version/serial/SecondStart 140.29
319 TestStartStop/group/embed-certs/serial/Stop 11.52
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 299.92
323 TestStartStop/group/newest-cni/serial/FirstStart 29.79
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
326 TestStartStop/group/newest-cni/serial/Stop 10.72
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
328 TestStartStop/group/newest-cni/serial/SecondStart 14.01
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
332 TestStartStop/group/newest-cni/serial/Pause 2.29
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.54
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
337 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
338 TestStartStop/group/old-k8s-version/serial/Pause 2.26
339 TestNetworkPlugins/group/auto/Start 34.84
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.79
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.87
343 TestNetworkPlugins/group/auto/KubeletFlags 0.32
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.03
346 TestNetworkPlugins/group/auto/NetCatPod 10.2
347 TestNetworkPlugins/group/auto/DNS 0.15
348 TestNetworkPlugins/group/auto/Localhost 0.12
349 TestNetworkPlugins/group/auto/HairPin 0.11
350 TestNetworkPlugins/group/kindnet/Start 55.64
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
354 TestStartStop/group/no-preload/serial/Pause 2.42
355 TestNetworkPlugins/group/calico/Start 52.04
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
359 TestNetworkPlugins/group/kindnet/DNS 0.15
360 TestNetworkPlugins/group/kindnet/Localhost 0.13
361 TestNetworkPlugins/group/kindnet/HairPin 0.11
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.28
364 TestNetworkPlugins/group/custom-flannel/Start 42.78
365 TestNetworkPlugins/group/calico/NetCatPod 12.19
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
368 TestNetworkPlugins/group/calico/DNS 0.13
369 TestNetworkPlugins/group/calico/Localhost 0.11
370 TestNetworkPlugins/group/calico/HairPin 0.11
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
372 TestStartStop/group/embed-certs/serial/Pause 2.42
373 TestNetworkPlugins/group/false/Start 65.36
374 TestNetworkPlugins/group/enable-default-cni/Start 70.96
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
377 TestNetworkPlugins/group/custom-flannel/DNS 0.13
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
380 TestNetworkPlugins/group/flannel/Start 45.39
381 TestNetworkPlugins/group/false/KubeletFlags 0.27
382 TestNetworkPlugins/group/false/NetCatPod 9.16
383 TestNetworkPlugins/group/false/DNS 0.14
384 TestNetworkPlugins/group/false/Localhost 0.12
385 TestNetworkPlugins/group/false/HairPin 0.12
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
388 TestNetworkPlugins/group/kubenet/Start 70.16
389 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
390 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
391 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
392 TestNetworkPlugins/group/flannel/ControllerPod 6.01
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
394 TestNetworkPlugins/group/flannel/NetCatPod 9.18
395 TestNetworkPlugins/group/flannel/DNS 0.18
396 TestNetworkPlugins/group/flannel/Localhost 0.14
397 TestNetworkPlugins/group/flannel/HairPin 0.16
398 TestNetworkPlugins/group/bridge/Start 68.65
399 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.42
403 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
404 TestNetworkPlugins/group/kubenet/NetCatPod 9.16
405 TestNetworkPlugins/group/kubenet/DNS 0.12
406 TestNetworkPlugins/group/kubenet/Localhost 0.1
407 TestNetworkPlugins/group/kubenet/HairPin 0.1
408 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
409 TestNetworkPlugins/group/bridge/NetCatPod 10.17
410 TestNetworkPlugins/group/bridge/DNS 0.12
411 TestNetworkPlugins/group/bridge/Localhost 0.1
412 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (8.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-248633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-248633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.438969714s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.44s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 20:47:43.005620   16274 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 20:47:43.005711   16274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
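
The preload check above only stats the cached tarball. When a stronger check is wanted, the download URL recorded later in this report embeds the expected digest in its ?checksum=md5:... query parameter, so the file can be verified by hand (md5sum on the host is assumed):

	# Path from the preload log above; expected digest taken from the
	# ?checksum= parameter of the download URL further below.
	md5sum /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	# expected: 9a82241e9b8b4ad2b5cca73108f2c7a3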

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-248633
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-248633: exit status 85 (52.612183ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-248633 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |          |
	|         | -p download-only-248633        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:34
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:34.601185   16286 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:34.601442   16286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:34.601452   16286 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:34.601456   16286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:34.601660   16286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	W0920 20:47:34.601771   16286 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-9514/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-9514/.minikube/config/config.json: no such file or directory
	I0920 20:47:34.602279   16286 out.go:352] Setting JSON to true
	I0920 20:47:34.603062   16286 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1803,"bootTime":1726863452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:34.603146   16286 start.go:139] virtualization: kvm guest
	I0920 20:47:34.605746   16286 out.go:97] [download-only-248633] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 20:47:34.605843   16286 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:47:34.605881   16286 notify.go:220] Checking for updates...
	I0920 20:47:34.607059   16286 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:34.608492   16286 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:34.609775   16286 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 20:47:34.610981   16286 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	I0920 20:47:34.612244   16286 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 20:47:34.614403   16286 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 20:47:34.614605   16286 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:34.636207   16286 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:47:34.636255   16286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:34.972316   16286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:47:34.963460238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:34.972407   16286 docker.go:318] overlay module found
	I0920 20:47:34.974012   16286 out.go:97] Using the docker driver based on user configuration
	I0920 20:47:34.974037   16286 start.go:297] selected driver: docker
	I0920 20:47:34.974042   16286 start.go:901] validating driver "docker" against <nil>
	I0920 20:47:34.974117   16286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:35.018792   16286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:47:35.010435997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:35.019014   16286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:35.019530   16286 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 20:47:35.019704   16286 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 20:47:35.021348   16286 out.go:169] Using Docker driver with root privileges
	I0920 20:47:35.022465   16286 cni.go:84] Creating CNI manager for ""
	I0920 20:47:35.022545   16286 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 20:47:35.022617   16286 start.go:340] cluster config:
	{Name:download-only-248633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-248633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:35.023844   16286 out.go:97] Starting "download-only-248633" primary control-plane node in "download-only-248633" cluster
	I0920 20:47:35.023865   16286 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 20:47:35.024864   16286 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 20:47:35.024885   16286 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 20:47:35.025006   16286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 20:47:35.039173   16286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 20:47:35.039324   16286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 20:47:35.039399   16286 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 20:47:35.048240   16286 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 20:47:35.048254   16286 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:35.048338   16286 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 20:47:35.049753   16286 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 20:47:35.049766   16286 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 20:47:35.076139   16286 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 20:47:38.089243   16286 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 20:47:38.089327   16286 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 20:47:38.887490   16286 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 20:47:38.888253   16286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/download-only-248633/config.json ...
	I0920 20:47:38.888282   16286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/download-only-248633/config.json: {Name:mka7e7cf410186efaaee856882795d4813a97ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:38.888442   16286 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 20:47:38.888618   16286 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-9514/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-248633 host does not exist
	  To start a cluster, run: "minikube start -p download-only-248633"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
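Note on the download above: the v1.20.0 preload is integrity-checked through the checksum parameter embedded in its URL (checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3). A minimal shell sketch of the same verification done by hand, assuming the tarball already sits at the cache path shown in the log (adjust the path to your own MINIKUBE_HOME layout):

    # Cache location as logged above.
    tarball=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
    # Expected digest, taken from the checksum parameter of the download URL.
    expected=9a82241e9b8b4ad2b5cca73108f2c7a3
    actual=$(md5sum "$tarball" | awk '{print $1}')
    [ "$actual" = "$expected" ] && echo "preload checksum OK" || echo "checksum mismatch"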

TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-248633
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (3.95s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-104332 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-104332 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.950478744s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.95s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 20:47:47.295912   16274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 20:47:47.295945   16274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9514/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-104332
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-104332: exit status 85 (52.153085ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-248633 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-248633        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-248633        | download-only-248633 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only        | download-only-104332 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-104332        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:43
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:43.380875   16656 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:43.380997   16656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:43.381007   16656 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:43.381013   16656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:43.381172   16656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 20:47:43.381694   16656 out.go:352] Setting JSON to true
	I0920 20:47:43.382475   16656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1811,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:43.382562   16656 start.go:139] virtualization: kvm guest
	I0920 20:47:43.384343   16656 out.go:97] [download-only-104332] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:43.384473   16656 notify.go:220] Checking for updates...
	I0920 20:47:43.385534   16656 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:43.386784   16656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:43.388015   16656 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 20:47:43.389083   16656 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	I0920 20:47:43.390140   16656 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 20:47:43.392089   16656 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 20:47:43.392335   16656 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:43.412898   16656 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:47:43.412956   16656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:43.454137   16656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-20 20:47:43.445889384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:43.454278   16656 docker.go:318] overlay module found
	I0920 20:47:43.455626   16656 out.go:97] Using the docker driver based on user configuration
	I0920 20:47:43.455649   16656 start.go:297] selected driver: docker
	I0920 20:47:43.455656   16656 start.go:901] validating driver "docker" against <nil>
	I0920 20:47:43.455746   16656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:47:43.503038   16656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-20 20:47:43.49494879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 20:47:43.503222   16656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:43.503908   16656 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 20:47:43.504104   16656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 20:47:43.505488   16656 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-104332 host does not exist
	  To start a cluster, run: "minikube start -p download-only-104332"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.17s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-104332
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (0.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-003803 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-003803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-003803
--- PASS: TestDownloadOnlyKic (0.93s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
I0920 20:47:48.784932   16274 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-574210 --alsologtostderr --binary-mirror http://127.0.0.1:39611 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-574210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-574210
--- PASS: TestBinaryMirror (0.73s)
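Note on the flag under test: --binary-mirror rewrites the kubectl/kubelet/kubeadm download base from https://dl.k8s.io to a caller-supplied endpoint (the run above used a throwaway local server on 127.0.0.1:39611). A rough manual equivalent, assuming a directory that mirrors the release/<version>/bin/linux/amd64 layout; the port 8000, directory name, and profile name below are illustrative:

    # Serve a local mirror of the Kubernetes release tree.
    mkdir -p mirror/release/v1.31.1/bin/linux/amd64
    # (copy kubectl and its .sha256 file into that directory first)
    (cd mirror && python3 -m http.server 8000) &
    # Point minikube at the mirror instead of dl.k8s.io.
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8000 --driver=docker --container-runtime=docker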

TestOffline (90.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-923786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-923786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m28.888728773s)
helpers_test.go:175: Cleaning up "offline-docker-923786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-923786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-923786: (2.07884066s)
--- PASS: TestOffline (90.97s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-135472
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-135472: exit status 85 (49.130943ms)

-- stdout --
	* Profile "addons-135472" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135472"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-135472
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-135472: exit status 85 (48.59865ms)

-- stdout --
	* Profile "addons-135472" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135472"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.91s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-135472 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-135472 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m28.905835052s)
--- PASS: TestAddons/Setup (208.91s)

TestAddons/serial/Volcano (37.66s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 13.922424ms
addons_test.go:843: volcano-admission stabilized in 14.022192ms
addons_test.go:851: volcano-controller stabilized in 14.069772ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-twr95" [3303854b-83cb-4908-9f46-be9f093baa26] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.002763949s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-7d5nf" [796c1e89-f5c9-48c9-b499-641b3f37c3b4] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002921612s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-54pgt" [b56bfab2-bfc3-4ecf-9850-7698b8452331] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0031528s
addons_test.go:870: (dbg) Run:  kubectl --context addons-135472 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-135472 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-135472 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [02b863a4-f702-47cd-8836-1a7bbec2e0d4] Pending
helpers_test.go:344: "test-job-nginx-0" [02b863a4-f702-47cd-8836-1a7bbec2e0d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [02b863a4-f702-47cd-8836-1a7bbec2e0d4] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.002699462s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable volcano --alsologtostderr -v=1: (10.3339055s)
--- PASS: TestAddons/serial/Volcano (37.66s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-135472 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-135472 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (18.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-135472 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-135472 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-135472 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dc3ab027-7629-48fa-9977-01a2a648fe73] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dc3ab027-7629-48fa-9977-01a2a648fe73] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004681368s
I0920 21:00:38.277617   16274 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-135472 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable ingress-dns --alsologtostderr -v=1: (1.238959226s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable ingress --alsologtostderr -v=1: (7.981187295s)
--- PASS: TestAddons/parallel/Ingress (18.49s)
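Note: the ingress assertions above reduce to two manual probes, an HTTP request carrying the Host header the Ingress routes on, and a DNS query against the ingress-dns responder at the node IP (192.168.49.2 in this run). A sketch of both, assuming the addons are still enabled on the profile:

    # Request the backing nginx service through the ingress controller.
    minikube -p addons-135472 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve the example record served by the ingress-dns addon.
    nslookup hello-john.test "$(minikube -p addons-135472 ip)"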

TestAddons/parallel/InspektorGadget (10.56s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7t9vj" [252a3399-d4ef-4abc-b590-6c65806ca3f2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003676508s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-135472
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-135472: (5.55808637s)
--- PASS: TestAddons/parallel/InspektorGadget (10.56s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
I0920 20:59:58.400974   16274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:405: metrics-server stabilized in 2.693368ms
I0920 20:59:58.404407   16274 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 20:59:58.404418   16274 kapi.go:107] duration metric: took 3.467862ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6j7n7" [0f128981-52f4-40ca-a230-ae60a04056dd] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002905109s
addons_test.go:413: (dbg) Run:  kubectl --context addons-135472 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)
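Note: once the metrics-server pod is healthy, kubectl top is the quickest manual confirmation that the aggregated metrics API is actually serving data; both commands below fail until the first scrape has completed:

    # Read node and pod usage through the metrics API.
    kubectl --context addons-135472 top nodes
    kubectl --context addons-135472 top pods -n kube-system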

TestAddons/parallel/CSI (48.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.473937ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-135472 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-135472 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7bfb5ed4-5779-419b-92a6-1d9c864fd8ea] Pending
helpers_test.go:344: "task-pv-pod" [7bfb5ed4-5779-419b-92a6-1d9c864fd8ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7bfb5ed4-5779-419b-92a6-1d9c864fd8ea] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003190749s
addons_test.go:528: (dbg) Run:  kubectl --context addons-135472 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135472 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135472 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-135472 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-135472 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-135472 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-135472 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [25191637-c6fa-4f50-8c33-454b1e5853de] Pending
helpers_test.go:344: "task-pv-pod-restore" [25191637-c6fa-4f50-8c33-454b1e5853de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [25191637-c6fa-4f50-8c33-454b1e5853de] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002953871s
addons_test.go:570: (dbg) Run:  kubectl --context addons-135472 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-135472 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-135472 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.713043687s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.08s)
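Note: the long runs of get pvc ... -o jsonpath={.status.phase} above are the test helper polling until the claim binds. An equivalent hand-rolled wait, with the 5s interval and 6m cap chosen here to mirror the helper's timeout:

    # Poll the PVC phase until it reports Bound, or give up after 6 minutes.
    for i in $(seq 1 72); do
      phase=$(kubectl --context addons-135472 get pvc hpvc -o jsonpath='{.status.phase}')
      [ "$phase" = "Bound" ] && break
      sleep 5
    done
    echo "final phase: $phase"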

TestAddons/parallel/Headlamp (16.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-135472 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qztjn" [25634d76-15f7-467a-ad72-b7d7c9cd1560] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qztjn" [25634d76-15f7-467a-ad72-b7d7c9cd1560] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.002719566s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable headlamp --alsologtostderr -v=1: (5.686789077s)
--- PASS: TestAddons/parallel/Headlamp (16.42s)

TestAddons/parallel/CloudSpanner (5.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hx4ds" [d343f95d-3684-4afe-8994-57553b7ffee5] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002873399s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-135472
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

TestAddons/parallel/LocalPath (10.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-135472 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-135472 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f918fbd4-0d27-4546-9895-4f0c333d2ee5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f918fbd4-0d27-4546-9895-4f0c333d2ee5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f918fbd4-0d27-4546-9895-4f0c333d2ee5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003375361s
addons_test.go:938: (dbg) Run:  kubectl --context addons-135472 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 ssh "cat /opt/local-path-provisioner/pvc-1b91b000-5e84-4be3-a317-9707e25013f8_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-135472 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-135472 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.17s)

TestAddons/parallel/NvidiaDevicePlugin (5.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nc7bn" [1891b171-e5c4-4a39-bf97-52c73162793d] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004942983s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-135472
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.44s)

TestAddons/parallel/Yakd (10.54s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dl9zq" [402e7d18-8f1e-444b-8ccd-8b2fdfd79a00] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002734214s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-135472 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-135472 addons disable yakd --alsologtostderr -v=1: (5.533625818s)
--- PASS: TestAddons/parallel/Yakd (10.54s)

TestAddons/StoppedEnableDisable (11.02s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-135472
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-135472: (10.806788716s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-135472
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-135472
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-135472
--- PASS: TestAddons/StoppedEnableDisable (11.02s)

TestCertOptions (23.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-368935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-368935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (20.605595588s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-368935 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-368935 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-368935 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-368935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-368935
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-368935: (2.081369365s)
--- PASS: TestCertOptions (23.28s)
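Note: the SAN and port assertions in TestCertOptions can be repeated by hand against a live profile by inspecting the generated apiserver certificate and the kubeconfig entry; a sketch (the profile here is deleted at the end of the run, so substitute your own):

    # The requested IPs/names should appear under Subject Alternative Name.
    minikube -p cert-options-368935 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | \
      grep -A1 'Subject Alternative Name'
    # The custom apiserver port (8555) should show up in the server URL.
    kubectl config view --minify --context cert-options-368935 \
      -o jsonpath='{.clusters[0].cluster.server}'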

TestCertExpiration (227.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-340839 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-340839 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.267863337s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-340839 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-340839 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (19.976817897s)
helpers_test.go:175: Cleaning up "cert-expiration-340839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-340839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-340839: (2.060873853s)
--- PASS: TestCertExpiration (227.31s)
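Note: what this test exercises, certificates minted with a 3m lifetime and then reissued after a restart with --cert-expiration=8760h, can be observed directly by reading the validity window off the node:

    # Print notBefore/notAfter for the apiserver certificate.
    minikube -p cert-expiration-340839 ssh \
      "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"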

TestDockerFlags (29.35s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-663519 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-663519 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.669432661s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-663519 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-663519 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-663519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-663519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-663519: (2.127516584s)
--- PASS: TestDockerFlags (29.35s)
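Note: the two systemctl probes above are how the test proves that --docker-env values reached the daemon's Environment= and that --docker-opt values reached its command line; the same checks by hand:

    # Environment= should list FOO=BAR and BAZ=BAT.
    minikube -p docker-flags-663519 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # ExecStart should carry --debug and --icc=true.
    minikube -p docker-flags-663519 ssh "sudo systemctl show docker --property=ExecStart --no-pager"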

TestForceSystemdFlag (26.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-976137 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-976137 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.010182146s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-976137 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-976137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-976137
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-976137: (2.13758098s)
--- PASS: TestForceSystemdFlag (26.49s)

TestForceSystemdEnv (25.18s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-366082 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-366082 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.298802078s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-366082 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-366082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-366082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-366082: (2.527615262s)
--- PASS: TestForceSystemdEnv (25.18s)

TestKVMDriverInstallOrUpdate (1.21s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0920 21:32:40.146925   16274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 21:32:40.147080   16274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 21:32:40.180057   16274 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 21:32:40.180491   16274 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 21:32:40.180565   16274 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3371043700/001/docker-machine-driver-kvm2
I0920 21:32:40.296839   16274 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3371043700/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000531370 gz:0xc000531378 tar:0xc000531320 tar.bz2:0xc000531330 tar.gz:0xc000531340 tar.xz:0xc000531350 tar.zst:0xc000531360 tbz2:0xc000531330 tgz:0xc000531340 txz:0xc000531350 tzst:0xc000531360 xz:0xc000531380 zip:0xc000531390 zst:0xc000531388] Getters:map[file:0xc001a83020 http:0xc000637270 https:0xc000637400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 21:32:40.296890   16274 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3371043700/001/docker-machine-driver-kvm2
I0920 21:32:40.849618   16274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 21:32:40.849682   16274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 21:32:40.876572   16274 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 21:32:40.876611   16274 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 21:32:40.876682   16274 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 21:32:40.876708   16274 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3371043700/002/docker-machine-driver-kvm2
I0920 21:32:40.899676   16274 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3371043700/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000531370 gz:0xc000531378 tar:0xc000531320 tar.bz2:0xc000531330 tar.gz:0xc000531340 tar.xz:0xc000531350 tar.zst:0xc000531360 tbz2:0xc000531330 tgz:0xc000531340 txz:0xc000531350 tzst:0xc000531360 xz:0xc000531380 zip:0xc000531390 zst:0xc000531388] Getters:map[file:0xc001bf37e0 http:0xc00028dae0 https:0xc00028db30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 21:32:40.899723   16274 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3371043700/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.21s)

TestErrorSpam/setup (19.84s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-617404 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-617404 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-617404 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-617404 --driver=docker  --container-runtime=docker: (19.836656849s)
--- PASS: TestErrorSpam/setup (19.84s)

TestErrorSpam/start (0.51s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 start --dry-run
--- PASS: TestErrorSpam/start (0.51s)

TestErrorSpam/status (0.81s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.07s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 pause
--- PASS: TestErrorSpam/pause (1.07s)

TestErrorSpam/unpause (1.16s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 unpause
--- PASS: TestErrorSpam/unpause (1.16s)

TestErrorSpam/stop (10.78s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 stop: (10.623928377s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617404 --log_dir /tmp/nospam-617404 stop
--- PASS: TestErrorSpam/stop (10.78s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-9514/.minikube/files/etc/test/nested/copy/16274/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.14s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-284790 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m10.134821019s)
--- PASS: TestFunctional/serial/StartWithProxy (70.14s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.45s)
=== RUN   TestFunctional/serial/SoftStart
I0920 21:03:11.227193   16274 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-284790 --alsologtostderr -v=8: (29.453566148s)
functional_test.go:663: soft start took 29.454353354s for "functional-284790" cluster.
I0920 21:03:40.681203   16274 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.45s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-284790 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.34s)

TestFunctional/serial/CacheCmd/cache/add_local (0.63s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-284790 /tmp/TestFunctionalserialCacheCmdcacheadd_local676964616/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache add minikube-local-cache-test:functional-284790
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache delete minikube-local-cache-test:functional-284790
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-284790
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.63s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (244.360825ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 kubectl -- --context functional-284790 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-284790 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (37.49s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-284790 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.484942467s)
functional_test.go:761: restart took 37.485083345s for "functional-284790" cluster.
I0920 21:04:23.046485   16274 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.49s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-284790 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.87s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.89s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 logs --file /tmp/TestFunctionalserialLogsFileCmd2831627749/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.89s)

TestFunctional/serial/InvalidService (3.79s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-284790 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-284790
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-284790: exit status 115 (294.154616ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32165 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-284790 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.79s)

TestFunctional/parallel/ConfigCmd (0.31s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 config get cpus: exit status 14 (49.429157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 config get cpus: exit status 14 (58.40742ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

TestFunctional/parallel/DashboardCmd (9.08s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-284790 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-284790 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 67171: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.08s)

TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-284790 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (149.764934ms)

-- stdout --
	* [functional-284790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 21:04:34.551446   66687 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:04:34.551531   66687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:04:34.551537   66687 out.go:358] Setting ErrFile to fd 2...
	I0920 21:04:34.551543   66687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:04:34.551719   66687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:04:34.552223   66687 out.go:352] Setting JSON to false
	I0920 21:04:34.553253   66687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2822,"bootTime":1726863452,"procs":350,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:04:34.553336   66687 start.go:139] virtualization: kvm guest
	I0920 21:04:34.554908   66687 out.go:177] * [functional-284790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:04:34.556394   66687 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:04:34.556439   66687 notify.go:220] Checking for updates...
	I0920 21:04:34.559384   66687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:04:34.561056   66687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 21:04:34.562521   66687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	I0920 21:04:34.564014   66687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:04:34.565344   66687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:04:34.566949   66687 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:04:34.567535   66687 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:04:34.592845   66687 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 21:04:34.592952   66687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 21:04:34.645809   66687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 21:04:34.635134942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 21:04:34.645948   66687 docker.go:318] overlay module found
	I0920 21:04:34.647574   66687 out.go:177] * Using the docker driver based on existing profile
	I0920 21:04:34.648564   66687 start.go:297] selected driver: docker
	I0920 21:04:34.648575   66687 start.go:901] validating driver "docker" against &{Name:functional-284790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-284790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:04:34.648691   66687 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:04:34.650473   66687 out.go:201] 
	W0920 21:04:34.651509   66687 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 21:04:34.652587   66687 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-284790 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-284790 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (134.659931ms)

-- stdout --
	* [functional-284790] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 21:04:34.410220   66610 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:04:34.410317   66610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:04:34.410326   66610 out.go:358] Setting ErrFile to fd 2...
	I0920 21:04:34.410331   66610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:04:34.410550   66610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:04:34.411011   66610 out.go:352] Setting JSON to false
	I0920 21:04:34.412028   66610 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2822,"bootTime":1726863452,"procs":350,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:04:34.412111   66610 start.go:139] virtualization: kvm guest
	I0920 21:04:34.414155   66610 out.go:177] * [functional-284790] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 21:04:34.415464   66610 notify.go:220] Checking for updates...
	I0920 21:04:34.415469   66610 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:04:34.416877   66610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:04:34.418303   66610 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	I0920 21:04:34.419603   66610 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	I0920 21:04:34.420804   66610 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:04:34.421992   66610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:04:34.423353   66610 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:04:34.423783   66610 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:04:34.445475   66610 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 21:04:34.445568   66610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 21:04:34.495692   66610 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 21:04:34.484830509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 21:04:34.495854   66610 docker.go:318] overlay module found
	I0920 21:04:34.497538   66610 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 21:04:34.498798   66610 start.go:297] selected driver: docker
	I0920 21:04:34.498818   66610 start.go:901] validating driver "docker" against &{Name:functional-284790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-284790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:04:34.498960   66610 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:04:34.501207   66610 out.go:201] 
	W0920 21:04:34.502381   66610 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 21:04:34.503502   66610 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.79s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

TestFunctional/parallel/ServiceCmdConnect (10.61s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-284790 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-284790 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tx7h8" [3f733133-c19a-41d3-9133-5497ac8eb8aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tx7h8" [3f733133-c19a-41d3-9133-5497ac8eb8aa] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004472966s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32367
functional_test.go:1675: http://192.168.49.2:32367: success! body:

Hostname: hello-node-connect-67bdd5bbb4-tx7h8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32367
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.61s)

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.71s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [86fca3c4-ba2d-46a0-be3b-7341c3cb164f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003281755s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-284790 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-284790 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-284790 get pvc myclaim -o=json
I0920 21:04:49.238148   16274 retry.go:31] will retry after 1.063421037s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3a9da7eb-6b06-4808-b48d-634f50f107e6 ResourceVersion:862 Generation:0 CreationTimestamp:2024-09-20 21:04:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0020acc90 VolumeMode:0xc0020acca0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-284790 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-284790 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [796bfaef-bfc2-4267-a1f2-8f6855850efd] Pending
helpers_test.go:344: "sp-pod" [796bfaef-bfc2-4267-a1f2-8f6855850efd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [796bfaef-bfc2-4267-a1f2-8f6855850efd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003546832s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-284790 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-284790 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-284790 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e7bfbc79-df42-4211-83d6-4a037b99036d] Pending
helpers_test.go:344: "sp-pod" [e7bfbc79-df42-4211-83d6-4a037b99036d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003523934s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-284790 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.71s)
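For reference, the claim/pod round-trip exercised above can be reproduced by hand with the same manifests the test applies; the jsonpath poll below is an illustrative stand-in for the test's own retry loop, not its code:

  # create the claim and wait for the hostpath provisioner to bind it
  kubectl --context functional-284790 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-284790 get pvc myclaim -o jsonpath='{.status.phase}'   # Pending -> Bound
  # mount it in a pod and write a marker file; the re-created pod sees the same file
  kubectl --context functional-284790 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-284790 exec sp-pod -- touch /tmp/mount/foo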

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh -n functional-284790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cp functional-284790:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd128486186/001/cp-test.txt
2024/09/20 21:04:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh -n functional-284790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh -n functional-284790 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)
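The three transfers above cover host-to-node, node-to-host, and copying into a directory that does not yet exist on the node. A minimal sketch of the same round-trip (remote paths are the test's own; the local destination is illustrative):

  minikube -p functional-284790 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-284790 cp functional-284790:/home/docker/cp-test.txt ./cp-test.txt
  # confirm the file landed inside the node
  minikube -p functional-284790 ssh -n functional-284790 "sudo cat /home/docker/cp-test.txt"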

                                                
                                    
TestFunctional/parallel/MySQL (23.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-284790 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-kxwsn" [9c717ce4-5d84-4617-a422-8536415e683f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-kxwsn" [9c717ce4-5d84-4617-a422-8536415e683f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003594769s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- mysql -ppassword -e "show databases;": exit status 1 (110.367651ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0920 21:04:58.980613   16274 retry.go:31] will retry after 1.426367938s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- mysql -ppassword -e "show databases;": exit status 1 (98.27556ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 21:05:00.506525   16274 retry.go:31] will retry after 2.124070084s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.19s)
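The two failed probes above are normal while the MySQL container is still initializing (first auth is rejected, then the server socket is not yet up); the harness simply retries until the query succeeds. A minimal sketch of the same wait loop (the pod name is from this run and will differ per deployment):

  # retry until mysqld accepts the query; each failure exits non-zero
  until kubectl --context functional-284790 exec mysql-6cdb49bbb-kxwsn -- \
      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    sleep 2
  done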

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16274/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /etc/test/nested/copy/16274/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16274.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /etc/ssl/certs/16274.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16274.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /usr/share/ca-certificates/16274.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/162742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /etc/ssl/certs/162742.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/162742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /usr/share/ca-certificates/162742.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
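The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each synced certificate is also exposed as <subject-hash>.0 so TLS libraries can find it by subject. A sketch of how such a name is derived (the .pem path is from this run):

  # prints the hash that becomes the .0 filename under /etc/ssl/certs
  openssl x509 -noout -subject_hash -in /etc/ssl/certs/16274.pem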

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-284790 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh "sudo systemctl is-active crio": exit status 1 (275.766149ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)
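The non-zero exit is the expected result here: systemctl is-active returns 0 only for an active unit, and "inactive" maps to exit status 3, which ssh propagates. The test passes precisely because crio is not running under the Docker runtime. Sketch of the same check:

  # succeeds (prints the message) only when crio is NOT active in the node
  minikube -p functional-284790 ssh "sudo systemctl is-active crio" || echo "crio disabled, as expected"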

                                                
                                    
TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-284790 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-284790
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-284790
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-284790 image ls --format short --alsologtostderr:
I0920 21:04:45.824943   71589 out.go:345] Setting OutFile to fd 1 ...
I0920 21:04:45.825058   71589 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:45.825069   71589 out.go:358] Setting ErrFile to fd 2...
I0920 21:04:45.825076   71589 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:45.825370   71589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
I0920 21:04:45.826023   71589 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:45.826137   71589 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:45.826684   71589 cli_runner.go:164] Run: docker container inspect functional-284790 --format={{.State.Status}}
I0920 21:04:45.843973   71589 ssh_runner.go:195] Run: systemctl --version
I0920 21:04:45.844031   71589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-284790
I0920 21:04:45.863730   71589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/functional-284790/id_rsa Username:docker}
I0920 21:04:45.957635   71589 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
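This test and the three ImageList variants that follow render the same image inventory in each supported output format:

  minikube -p functional-284790 image ls --format short
  minikube -p functional-284790 image ls --format table
  minikube -p functional-284790 image ls --format json
  minikube -p functional-284790 image ls --format yaml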

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-284790 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-284790 | acce02a91c69c | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-284790 | fc45dfb559c3c | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/kicbase/echo-server               | functional-284790 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-284790 image ls --format table --alsologtostderr:
I0920 21:04:50.022387   72155 out.go:345] Setting OutFile to fd 1 ...
I0920 21:04:50.022644   72155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:50.022652   72155 out.go:358] Setting ErrFile to fd 2...
I0920 21:04:50.022657   72155 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:50.022809   72155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
I0920 21:04:50.023366   72155 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:50.023453   72155 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:50.023943   72155 cli_runner.go:164] Run: docker container inspect functional-284790 --format={{.State.Status}}
I0920 21:04:50.046587   72155 ssh_runner.go:195] Run: systemctl --version
I0920 21:04:50.046627   72155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-284790
I0920 21:04:50.065689   72155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/functional-284790/id_rsa Username:docker}
I0920 21:04:50.157791   72155 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-284790 image ls --format json --alsologtostderr:
[{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-284790"],"size":"4940000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff620
7064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"acce02a91c69c222ef416e9f213b6e5ab196b620040d39a2e2ee1fc844f3ac7d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-284790"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed4
3e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"fc45dfb559c3c7c7b8acf8cd21ca20f748737d765ad0f8751f109f81dddad7e3","repoDigests":[],"repoTags":["localhost/my-image:functional-284790"],"size":"1240000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-284790 image ls --format json --alsologtostderr:
I0920 21:04:49.814092   72104 out.go:345] Setting OutFile to fd 1 ...
I0920 21:04:49.814386   72104 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:49.814401   72104 out.go:358] Setting ErrFile to fd 2...
I0920 21:04:49.814408   72104 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:49.814687   72104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
I0920 21:04:49.815211   72104 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:49.815340   72104 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:49.815887   72104 cli_runner.go:164] Run: docker container inspect functional-284790 --format={{.State.Status}}
I0920 21:04:49.833763   72104 ssh_runner.go:195] Run: systemctl --version
I0920 21:04:49.833805   72104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-284790
I0920 21:04:49.851103   72104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/functional-284790/id_rsa Username:docker}
I0920 21:04:49.941403   72104 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-284790 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: acce02a91c69c222ef416e9f213b6e5ab196b620040d39a2e2ee1fc844f3ac7d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-284790
size: "30"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-284790
size: "4940000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-284790 image ls --format yaml --alsologtostderr:
I0920 21:04:46.034572   71652 out.go:345] Setting OutFile to fd 1 ...
I0920 21:04:46.034675   71652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:46.034686   71652 out.go:358] Setting ErrFile to fd 2...
I0920 21:04:46.034690   71652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:46.034878   71652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
I0920 21:04:46.035483   71652 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:46.035614   71652 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:46.036020   71652 cli_runner.go:164] Run: docker container inspect functional-284790 --format={{.State.Status}}
I0920 21:04:46.052536   71652 ssh_runner.go:195] Run: systemctl --version
I0920 21:04:46.052571   71652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-284790
I0920 21:04:46.069945   71652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/functional-284790/id_rsa Username:docker}
I0920 21:04:46.158015   71652 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh pgrep buildkitd: exit status 1 (277.491587ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image build -t localhost/my-image:functional-284790 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-284790 image build -t localhost/my-image:functional-284790 testdata/build --alsologtostderr: (3.083947456s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-284790 image build -t localhost/my-image:functional-284790 testdata/build --alsologtostderr:
I0920 21:04:46.511658   71802 out.go:345] Setting OutFile to fd 1 ...
I0920 21:04:46.511942   71802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:46.511955   71802 out.go:358] Setting ErrFile to fd 2...
I0920 21:04:46.511963   71802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:04:46.512174   71802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
I0920 21:04:46.512981   71802 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:46.513663   71802 config.go:182] Loaded profile config "functional-284790": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 21:04:46.514161   71802 cli_runner.go:164] Run: docker container inspect functional-284790 --format={{.State.Status}}
I0920 21:04:46.533714   71802 ssh_runner.go:195] Run: systemctl --version
I0920 21:04:46.533759   71802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-284790
I0920 21:04:46.548997   71802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/functional-284790/id_rsa Username:docker}
I0920 21:04:46.661892   71802 build_images.go:161] Building image from path: /tmp/build.3541717351.tar
I0920 21:04:46.661967   71802 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 21:04:46.672055   71802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3541717351.tar
I0920 21:04:46.675595   71802 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3541717351.tar: stat -c "%s %y" /var/lib/minikube/build/build.3541717351.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3541717351.tar': No such file or directory
I0920 21:04:46.675625   71802 ssh_runner.go:362] scp /tmp/build.3541717351.tar --> /var/lib/minikube/build/build.3541717351.tar (3072 bytes)
I0920 21:04:46.699361   71802 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3541717351
I0920 21:04:46.708736   71802 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3541717351 -xf /var/lib/minikube/build/build.3541717351.tar
I0920 21:04:46.761951   71802 docker.go:360] Building image: /var/lib/minikube/build/build.3541717351
I0920 21:04:46.761997   71802 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-284790 /var/lib/minikube/build/build.3541717351
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:fc45dfb559c3c7c7b8acf8cd21ca20f748737d765ad0f8751f109f81dddad7e3 done
#8 naming to localhost/my-image:functional-284790 done
#8 DONE 0.0s
I0920 21:04:49.527644   71802 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-284790 /var/lib/minikube/build/build.3541717351: (2.76560996s)
I0920 21:04:49.527721   71802 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3541717351
I0920 21:04:49.536736   71802 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3541717351.tar
I0920 21:04:49.544842   71802 build_images.go:217] Built localhost/my-image:functional-284790 from /tmp/build.3541717351.tar
I0920 21:04:49.544871   71802 build_images.go:133] succeeded building to: functional-284790
I0920 21:04:49.544878   71802 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)
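The build context is staged into the node as a tar and built there with BuildKit; the steps above correspond to a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of the same flow:

  # build inside the cluster's runtime, then confirm the image is visible to it
  minikube -p functional-284790 image build -t localhost/my-image:functional-284790 testdata/build
  minikube -p functional-284790 image ls | grep my-image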

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-284790
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.99s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-284790 docker-env) && out/minikube-linux-amd64 status -p functional-284790"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-284790 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.99s)
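docker-env prints shell exports (DOCKER_HOST and related variables) that re-point the local docker CLI at the daemon inside the minikube node, which is why both commands above are wrapped in eval. Typical interactive use follows the same pattern:

  # after this, docker talks to the cluster's daemon, not the host's
  eval $(minikube -p functional-284790 docker-env)
  docker images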

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-284790 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-284790 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9vq2p" [ee8a92f2-e6d4-42e8-a9af-814e8ada0a37] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9vq2p" [ee8a92f2-e6d4-42e8-a9af-814e8ada0a37] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004061726s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image load --daemon kicbase/echo-server:functional-284790 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image load --daemon kicbase/echo-server:functional-284790 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "373.247522ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "56.268195ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "369.520814ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.995634ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-284790
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image load --daemon kicbase/echo-server:functional-284790 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdany-port2466677075/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726866271246167541" to /tmp/TestFunctionalparallelMountCmdany-port2466677075/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726866271246167541" to /tmp/TestFunctionalparallelMountCmdany-port2466677075/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726866271246167541" to /tmp/TestFunctionalparallelMountCmdany-port2466677075/001/test-1726866271246167541
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.189648ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0920 21:04:31.567671   16274 retry.go:31] will retry after 437.318705ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 21:04 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 21:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 21:04 test-1726866271246167541
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh cat /mount-9p/test-1726866271246167541
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-284790 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dfcdb6ab-1236-4757-9d31-f4689cc9ad1b] Pending
helpers_test.go:344: "busybox-mount" [dfcdb6ab-1236-4757-9d31-f4689cc9ad1b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dfcdb6ab-1236-4757-9d31-f4689cc9ad1b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dfcdb6ab-1236-4757-9d31-f4689cc9ad1b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004399107s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-284790 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdany-port2466677075/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.57s)
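minikube mount runs as a long-lived process serving the host directory to the node over 9p, so the first findmnt probe above can race the server coming up; the harness retries once it is ready. A minimal sketch of the same check (/tmp/shared is an illustrative host path, not the test's):

  minikube -p functional-284790 mount /tmp/shared:/mount-9p &
  # verify from inside the node that the 9p mount landed
  minikube -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-284790 ssh "ls -la /mount-9p"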

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image save kicbase/echo-server:functional-284790 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image rm kicbase/echo-server:functional-284790 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-284790
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 image save --daemon kicbase/echo-server:functional-284790 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-284790
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
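Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon cover the full image export/import cycle. A minimal sketch of the same lifecycle (the tar path is illustrative):

  minikube -p functional-284790 image save kicbase/echo-server:functional-284790 /tmp/echo-server.tar
  minikube -p functional-284790 image rm kicbase/echo-server:functional-284790
  minikube -p functional-284790 image load /tmp/echo-server.tar
  # push the in-cluster copy back into the host's docker daemon
  minikube -p functional-284790 image save --daemon kicbase/echo-server:functional-284790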

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdspecific-port2975486695/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.575716ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0920 21:04:38.082189   16274 retry.go:31] will retry after 281.622049ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdspecific-port2975486695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh "sudo umount -f /mount-9p": exit status 1 (265.211285ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-284790 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdspecific-port2975486695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

TestFunctional/parallel/ServiceCmd/List (0.97s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T" /mount1: exit status 1 (418.175076ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 21:04:39.772802   16274 retry.go:31] will retry after 451.189555ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-284790 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-284790 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1065796073/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
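
VerifyCleanup starts three mounts of the same host directory at /mount1..3 and then relies on a single kill switch to reap them all. Sketched by hand (host path illustrative; commands taken from the run above):

    # launch several concurrent mounts in the background
    minikube -p functional-284790 mount /tmp/data:/mount1 &
    minikube -p functional-284790 mount /tmp/data:/mount2 &
    minikube -p functional-284790 mount /tmp/data:/mount3 &
    # verify one of them, then kill every mount process for the profile at once
    minikube -p functional-284790 ssh "findmnt -T /mount1"
    minikube -p functional-284790 mount --kill=true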

TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-284790 service list -o json: (1.300539245s)
functional_test.go:1494: Took "1.300622474s" to run "out/minikube-linux-amd64 -p functional-284790 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
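
All three update-context subtests drive the same command, which rewrites the profile's kubeconfig entry in place so the context survives an IP or port change:

    # refresh the kubeconfig context for the profile
    minikube -p functional-284790 update-context --alsologtostderr -v=2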

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31519
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-284790 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31519
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
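
Taken together, the ServiceCmd subtests cover the main output modes of "minikube service". Condensed from the runs above:

    # list services, human-readable and as JSON
    minikube -p functional-284790 service list
    minikube -p functional-284790 service list -o json
    # print the HTTPS endpoint, a templated field, and the plain URL
    minikube -p functional-284790 service --namespace=default --https --url hello-node
    minikube -p functional-284790 service hello-node --url --format={{.IP}}
    minikube -p functional-284790 service hello-node --url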

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 70871: os: process already finished
helpers_test.go:502: unable to terminate pid 70628: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-284790 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [686cf42d-28da-4dc3-8aa9-5cfa7d09365a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [686cf42d-28da-4dc3-8aa9-5cfa7d09365a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003012336s
I0920 21:05:01.205945   16274 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.25s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-284790 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.69.187 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-284790 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
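
The TunnelCmd serial sequence walks the full LoadBalancer flow: start a tunnel, deploy a LoadBalancer service, wait for an ingress IP, then hit it straight from the host. Reduced to its essentials (manifest and service name are the test's own testdata/testsvc.yaml):

    # create a route from the host into the cluster so LoadBalancer services get an IP
    minikube -p functional-284790 tunnel &
    # deploy the nginx test service and wait for its pod to become Ready
    kubectl --context functional-284790 apply -f testdata/testsvc.yaml
    # read the ingress IP assigned while the tunnel is up; the service is then
    # directly reachable from the host at that address
    kubectl --context functional-284790 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'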

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-284790
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-284790
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-284790
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (96.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-612860 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 21:06:18.468555   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.475024   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.486372   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.507890   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.549248   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.630627   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:18.792150   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:19.113813   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:19.755954   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:21.037562   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:23.599196   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:28.721272   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:38.963347   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-612860 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m36.320381037s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.96s)
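
StartCluster brings up the multi-control-plane topology in one invocation; --ha is the flag that requests extra control-plane nodes (three by default), and status must then report every node Running and Configured. The invocation, reduced to its essentials:

    # start an HA cluster on the docker driver, then check all nodes
    minikube start -p ha-612860 --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
    minikube -p ha-612860 status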

TestMultiControlPlane/serial/DeployApp (4.75s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-612860 -- rollout status deployment/busybox: (2.964722644s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-5b2gg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-gbsgf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-rs78h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-5b2gg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-gbsgf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-rs78h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-5b2gg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-gbsgf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-rs78h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.75s)

TestMultiControlPlane/serial/PingHostFromPods (0.98s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-5b2gg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-5b2gg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-gbsgf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-gbsgf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-rs78h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-612860 -- exec busybox-7dff88458-rs78h -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
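
PingHostFromPods checks host reachability from inside the cluster: each busybox pod resolves host.minikube.internal, extracts the address, and pings it once (192.168.49.1 is the docker network gateway in this run). The per-pod probe, sketched with plain kubectl against the same context:

    # resolve the host's address as seen from a pod, then ping it once
    kubectl --context ha-612860 exec busybox-7dff88458-5b2gg -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-612860 exec busybox-7dff88458-5b2gg -- sh -c "ping -c 1 192.168.49.1"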

TestMultiControlPlane/serial/AddWorkerNode (19.53s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-612860 -v=7 --alsologtostderr
E0920 21:06:59.445666   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-612860 -v=7 --alsologtostderr: (18.749386792s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.53s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-612860 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

TestMultiControlPlane/serial/CopyFile (14.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp testdata/cp-test.txt ha-612860:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3242566781/001/cp-test_ha-612860.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860:/home/docker/cp-test.txt ha-612860-m02:/home/docker/cp-test_ha-612860_ha-612860-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test_ha-612860_ha-612860-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860:/home/docker/cp-test.txt ha-612860-m03:/home/docker/cp-test_ha-612860_ha-612860-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test_ha-612860_ha-612860-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860:/home/docker/cp-test.txt ha-612860-m04:/home/docker/cp-test_ha-612860_ha-612860-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test_ha-612860_ha-612860-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp testdata/cp-test.txt ha-612860-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3242566781/001/cp-test_ha-612860-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m02:/home/docker/cp-test.txt ha-612860:/home/docker/cp-test_ha-612860-m02_ha-612860.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test_ha-612860-m02_ha-612860.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m02:/home/docker/cp-test.txt ha-612860-m03:/home/docker/cp-test_ha-612860-m02_ha-612860-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test_ha-612860-m02_ha-612860-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m02:/home/docker/cp-test.txt ha-612860-m04:/home/docker/cp-test_ha-612860-m02_ha-612860-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test_ha-612860-m02_ha-612860-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp testdata/cp-test.txt ha-612860-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3242566781/001/cp-test_ha-612860-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m03:/home/docker/cp-test.txt ha-612860:/home/docker/cp-test_ha-612860-m03_ha-612860.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test_ha-612860-m03_ha-612860.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m03:/home/docker/cp-test.txt ha-612860-m02:/home/docker/cp-test_ha-612860-m03_ha-612860-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test_ha-612860-m03_ha-612860-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m03:/home/docker/cp-test.txt ha-612860-m04:/home/docker/cp-test_ha-612860-m03_ha-612860-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test_ha-612860-m03_ha-612860-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp testdata/cp-test.txt ha-612860-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3242566781/001/cp-test_ha-612860-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m04:/home/docker/cp-test.txt ha-612860:/home/docker/cp-test_ha-612860-m04_ha-612860.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860 "sudo cat /home/docker/cp-test_ha-612860-m04_ha-612860.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m04:/home/docker/cp-test.txt ha-612860-m02:/home/docker/cp-test_ha-612860-m04_ha-612860-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test_ha-612860-m04_ha-612860-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 cp ha-612860-m04:/home/docker/cp-test.txt ha-612860-m03:/home/docker/cp-test_ha-612860-m04_ha-612860-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 ssh -n ha-612860-m03 "sudo cat /home/docker/cp-test_ha-612860-m04_ha-612860-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.65s)
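
CopyFile pushes a test file through every node pair with "minikube cp" and verifies each copy over SSH; -n selects the node within the profile. One leg of that matrix, extracted from the run:

    # host -> node, then node -> node
    minikube -p ha-612860 cp testdata/cp-test.txt ha-612860:/home/docker/cp-test.txt
    minikube -p ha-612860 cp ha-612860:/home/docker/cp-test.txt ha-612860-m02:/home/docker/cp-test_ha-612860_ha-612860-m02.txt
    # read the copy back on the target node
    minikube -p ha-612860 ssh -n ha-612860-m02 "sudo cat /home/docker/cp-test_ha-612860_ha-612860-m02.txt"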

TestMultiControlPlane/serial/StopSecondaryNode (11.32s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 node stop m02 -v=7 --alsologtostderr
E0920 21:07:40.407820   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-612860 node stop m02 -v=7 --alsologtostderr: (10.705011479s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr: exit status 7 (614.410541ms)

-- stdout --
	ha-612860
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-612860-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-612860-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-612860-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 21:07:41.005159  100409 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:07:41.005254  100409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:07:41.005261  100409 out.go:358] Setting ErrFile to fd 2...
	I0920 21:07:41.005265  100409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:07:41.005440  100409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:07:41.005719  100409 out.go:352] Setting JSON to false
	I0920 21:07:41.005749  100409 mustload.go:65] Loading cluster: ha-612860
	I0920 21:07:41.005862  100409 notify.go:220] Checking for updates...
	I0920 21:07:41.006123  100409 config.go:182] Loaded profile config "ha-612860": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:07:41.006141  100409 status.go:174] checking status of ha-612860 ...
	I0920 21:07:41.006517  100409 cli_runner.go:164] Run: docker container inspect ha-612860 --format={{.State.Status}}
	I0920 21:07:41.022947  100409 status.go:364] ha-612860 host status = "Running" (err=<nil>)
	I0920 21:07:41.022982  100409 host.go:66] Checking if "ha-612860" exists ...
	I0920 21:07:41.023212  100409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-612860
	I0920 21:07:41.039314  100409 host.go:66] Checking if "ha-612860" exists ...
	I0920 21:07:41.039550  100409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:07:41.039619  100409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-612860
	I0920 21:07:41.054855  100409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/ha-612860/id_rsa Username:docker}
	I0920 21:07:41.146024  100409 ssh_runner.go:195] Run: systemctl --version
	I0920 21:07:41.149617  100409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:07:41.158886  100409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 21:07:41.207924  100409 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-20 21:07:41.199034284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 21:07:41.208631  100409 kubeconfig.go:125] found "ha-612860" server: "https://192.168.49.254:8443"
	I0920 21:07:41.208663  100409 api_server.go:166] Checking apiserver status ...
	I0920 21:07:41.208703  100409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:07:41.219291  100409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2401/cgroup
	I0920 21:07:41.227642  100409 api_server.go:182] apiserver freezer: "12:freezer:/docker/080498d76334404d3c3db3a8a324083a4a61a3935645ec88eff55163bb9aae61/kubepods/burstable/podef516ab1b2bd82687aa86c557b6bd308/8ba84c50b9e4ccc33eb0ad52391acec527add5ca9f83d3580951f676ee943fbe"
	I0920 21:07:41.227705  100409 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/080498d76334404d3c3db3a8a324083a4a61a3935645ec88eff55163bb9aae61/kubepods/burstable/podef516ab1b2bd82687aa86c557b6bd308/8ba84c50b9e4ccc33eb0ad52391acec527add5ca9f83d3580951f676ee943fbe/freezer.state
	I0920 21:07:41.234757  100409 api_server.go:204] freezer state: "THAWED"
	I0920 21:07:41.234780  100409 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 21:07:41.238239  100409 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 21:07:41.238262  100409 status.go:456] ha-612860 apiserver status = Running (err=<nil>)
	I0920 21:07:41.238272  100409 status.go:176] ha-612860 status: &{Name:ha-612860 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:07:41.238289  100409 status.go:174] checking status of ha-612860-m02 ...
	I0920 21:07:41.238516  100409 cli_runner.go:164] Run: docker container inspect ha-612860-m02 --format={{.State.Status}}
	I0920 21:07:41.254192  100409 status.go:364] ha-612860-m02 host status = "Stopped" (err=<nil>)
	I0920 21:07:41.254206  100409 status.go:377] host is not running, skipping remaining checks
	I0920 21:07:41.254211  100409 status.go:176] ha-612860-m02 status: &{Name:ha-612860-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:07:41.254225  100409 status.go:174] checking status of ha-612860-m03 ...
	I0920 21:07:41.254433  100409 cli_runner.go:164] Run: docker container inspect ha-612860-m03 --format={{.State.Status}}
	I0920 21:07:41.270418  100409 status.go:364] ha-612860-m03 host status = "Running" (err=<nil>)
	I0920 21:07:41.270439  100409 host.go:66] Checking if "ha-612860-m03" exists ...
	I0920 21:07:41.270656  100409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-612860-m03
	I0920 21:07:41.286081  100409 host.go:66] Checking if "ha-612860-m03" exists ...
	I0920 21:07:41.286314  100409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:07:41.286358  100409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-612860-m03
	I0920 21:07:41.300864  100409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/ha-612860-m03/id_rsa Username:docker}
	I0920 21:07:41.389981  100409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:07:41.399902  100409 kubeconfig.go:125] found "ha-612860" server: "https://192.168.49.254:8443"
	I0920 21:07:41.399926  100409 api_server.go:166] Checking apiserver status ...
	I0920 21:07:41.399956  100409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:07:41.409244  100409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2224/cgroup
	I0920 21:07:41.416726  100409 api_server.go:182] apiserver freezer: "12:freezer:/docker/f8a9c50ad10113b5498d113b6e52882da50ca179cbb4cb8e228eee2594b5aa43/kubepods/burstable/pod8d618afd52aae7e9fc53c169593bcc85/bd8b130394981e11b71f8cb804377075e3dd7415b9a49fe15c896baf3cb4bb54"
	I0920 21:07:41.416773  100409 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f8a9c50ad10113b5498d113b6e52882da50ca179cbb4cb8e228eee2594b5aa43/kubepods/burstable/pod8d618afd52aae7e9fc53c169593bcc85/bd8b130394981e11b71f8cb804377075e3dd7415b9a49fe15c896baf3cb4bb54/freezer.state
	I0920 21:07:41.423633  100409 api_server.go:204] freezer state: "THAWED"
	I0920 21:07:41.423664  100409 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 21:07:41.428098  100409 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 21:07:41.428115  100409 status.go:456] ha-612860-m03 apiserver status = Running (err=<nil>)
	I0920 21:07:41.428122  100409 status.go:176] ha-612860-m03 status: &{Name:ha-612860-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:07:41.428142  100409 status.go:174] checking status of ha-612860-m04 ...
	I0920 21:07:41.428349  100409 cli_runner.go:164] Run: docker container inspect ha-612860-m04 --format={{.State.Status}}
	I0920 21:07:41.444343  100409 status.go:364] ha-612860-m04 host status = "Running" (err=<nil>)
	I0920 21:07:41.444361  100409 host.go:66] Checking if "ha-612860-m04" exists ...
	I0920 21:07:41.444581  100409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-612860-m04
	I0920 21:07:41.459865  100409 host.go:66] Checking if "ha-612860-m04" exists ...
	I0920 21:07:41.460082  100409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:07:41.460123  100409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-612860-m04
	I0920 21:07:41.476192  100409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/ha-612860-m04/id_rsa Username:docker}
	I0920 21:07:41.565813  100409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:07:41.575568  100409 status.go:176] ha-612860-m04 status: &{Name:ha-612860-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.32s)
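
Note the exit code above: with m02 stopped, "minikube status" still prints every node's state but exits 7 instead of 0, and that non-zero exit is exactly what the test asserts. The stop/inspect pair, sketched by hand:

    # stop one control-plane node, then read cluster-wide status
    minikube -p ha-612860 node stop m02
    minikube -p ha-612860 status   # exits 7 here because a node is Stopped
    # the follow-up RestartSecondaryNode test brings it back:
    # minikube -p ha-612860 node start m02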

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (65.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-612860 node start m02 -v=7 --alsologtostderr: (1m4.441445311s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (65.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.09s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-612860 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-612860 -v=7 --alsologtostderr
E0920 21:09:02.329123   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-612860 -v=7 --alsologtostderr: (33.470373567s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-612860 --wait=true -v=7 --alsologtostderr
E0920 21:09:29.131256   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.137494   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.149136   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.172712   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.214134   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.295513   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.457432   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:29.779505   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:30.421024   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:31.702723   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:34.264111   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:39.385774   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:09:49.627870   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:10:10.110137   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:10:51.073043   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:18.469054   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:11:46.171452   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:12:12.995078   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-612860 --wait=true -v=7 --alsologtostderr: (3m16.525210472s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-612860
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.09s)
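
RestartClusterKeepsNodes asserts that the node set recorded before a full stop matches the set after the cluster is started again. The shape of the check:

    # capture the node list, bounce the whole cluster, compare
    minikube node list -p ha-612860
    minikube stop -p ha-612860
    minikube start -p ha-612860 --wait=true
    minikube node list -p ha-612860   # must match the first listing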

TestMultiControlPlane/serial/DeleteSecondaryNode (6.05s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-612860 node delete m03 -v=7 --alsologtostderr: (5.346439865s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.05s)
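
DeleteSecondaryNode removes a control-plane member and re-checks that the survivors are Ready, both through minikube and through the API server:

    # drop the third control-plane node and confirm what remains
    minikube -p ha-612860 node delete m03
    minikube -p ha-612860 status
    kubectl get nodes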

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (22.93s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-612860 stop -v=7 --alsologtostderr: (22.845868226s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr: exit status 7 (88.426203ms)

-- stdout --
	ha-612860
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-612860-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-612860-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 21:13:07.936985  132444 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:13:07.937080  132444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:13:07.937089  132444 out.go:358] Setting ErrFile to fd 2...
	I0920 21:13:07.937094  132444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:13:07.937264  132444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:13:07.937409  132444 out.go:352] Setting JSON to false
	I0920 21:13:07.937436  132444 mustload.go:65] Loading cluster: ha-612860
	I0920 21:13:07.937562  132444 notify.go:220] Checking for updates...
	I0920 21:13:07.937846  132444 config.go:182] Loaded profile config "ha-612860": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:13:07.937865  132444 status.go:174] checking status of ha-612860 ...
	I0920 21:13:07.938281  132444 cli_runner.go:164] Run: docker container inspect ha-612860 --format={{.State.Status}}
	I0920 21:13:07.954946  132444 status.go:364] ha-612860 host status = "Stopped" (err=<nil>)
	I0920 21:13:07.954990  132444 status.go:377] host is not running, skipping remaining checks
	I0920 21:13:07.954999  132444 status.go:176] ha-612860 status: &{Name:ha-612860 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:13:07.955033  132444 status.go:174] checking status of ha-612860-m02 ...
	I0920 21:13:07.955370  132444 cli_runner.go:164] Run: docker container inspect ha-612860-m02 --format={{.State.Status}}
	I0920 21:13:07.970412  132444 status.go:364] ha-612860-m02 host status = "Stopped" (err=<nil>)
	I0920 21:13:07.970426  132444 status.go:377] host is not running, skipping remaining checks
	I0920 21:13:07.970431  132444 status.go:176] ha-612860-m02 status: &{Name:ha-612860-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:13:07.970443  132444 status.go:174] checking status of ha-612860-m04 ...
	I0920 21:13:07.970655  132444 cli_runner.go:164] Run: docker container inspect ha-612860-m04 --format={{.State.Status}}
	I0920 21:13:07.985805  132444 status.go:364] ha-612860-m04 host status = "Stopped" (err=<nil>)
	I0920 21:13:07.985826  132444 status.go:377] host is not running, skipping remaining checks
	I0920 21:13:07.985831  132444 status.go:176] ha-612860-m04 status: &{Name:ha-612860-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (22.93s)
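Note that exit status 7 here is the expected outcome, not a failure: with every host stopped, "status" reports the stopped state through its exit code. A minimal Go sketch (not part of the test suite) of the same check, using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-612860",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	// With every node stopped, "status" exits non-zero by design;
	// this run shows exit status 7.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}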

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (88.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-612860 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 21:14:29.131343   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-612860 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m27.92867601s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (88.70s)
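The go-template in ha_test.go:592 above walks every node's conditions and prints the status of each "Ready" condition. A standalone sketch of the same readiness count, assuming kubectl points at the restarted cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The readiness template from ha_test.go:592: one line per node,
	// holding the status ("True"/"False") of its Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d node(s) Ready\n", strings.Count(string(out), "True"))
}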

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-612860 --control-plane -v=7 --alsologtostderr
E0920 21:14:56.837060   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-612860 --control-plane -v=7 --alsologtostderr: (36.637415625s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-612860 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestImageBuild/serial/Setup (19.47s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-870811 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-870811 --driver=docker  --container-runtime=docker: (19.468916743s)
--- PASS: TestImageBuild/serial/Setup (19.47s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.2s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-870811
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-870811: (1.200640078s)
--- PASS: TestImageBuild/serial/NormalBuild (1.20s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.69s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-870811
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.69s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.5s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-870811
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.50s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-870811
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)

                                                
                                    
TestJSONOutput/start/Command (69.61s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-774375 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0920 21:16:18.469873   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-774375 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m9.61253789s)
--- PASS: TestJSONOutput/start/Command (69.61s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-774375 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.39s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-774375 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.39s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-774375 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-774375 --output=json --user=testUser: (10.813529142s)
--- PASS: TestJSONOutput/stop/Command (10.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-729817 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-729817 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (54.80861ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"624a64ac-6254-4223-a9ab-a12f78193d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-729817] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4189fbc2-3823-459d-ad45-bd0a97ed3689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"33d82465-8f9e-4ca1-9457-707edf5adcf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea1b4d37-bce0-4331-8d7b-90a781689143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig"}}
	{"specversion":"1.0","id":"4778e72b-e436-489b-9821-5724b164c026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube"}}
	{"specversion":"1.0","id":"c742470d-4d75-46e1-bb96-d045fec4362b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7e511291-510f-4369-926c-eb046eb7cc03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed97b7dc-ab96-4eaa-85b3-d31c3ff4ed5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-729817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-729817
--- PASS: TestErrorJSONOutput (0.18s)
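The stdout above is the CloudEvents stream that the JSONOutput subtests earlier (DistinctCurrentSteps, IncreasingCurrentSteps) validate: step events carry a data.currentstep field that must never repeat or decrease. A sketch of that check, fed event lines like the ones above on stdin; field names match the events shown:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// event mirrors just the "data" payload used below from the lines above.
type event struct {
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // not an event line
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue // info/error events carry no currentstep
		}
		if step <= last {
			fmt.Printf("currentstep %d repeats or decreases (last %d)\n", step, last)
		}
		last = step
	}
}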

                                                
                                    
TestKicCustomNetwork/create_custom_network (22.04s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-603957 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-603957 --network=: (20.10841839s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-603957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-603957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-603957: (1.910905724s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.04s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.83s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-282936 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-282936 --network=bridge: (20.958022994s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-282936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-282936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-282936: (1.855105042s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.83s)

                                                
                                    
TestKicExistingNetwork (21.77s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0920 21:17:55.011666   16274 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 21:17:55.026152   16274 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 21:17:55.026219   16274 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 21:17:55.026236   16274 cli_runner.go:164] Run: docker network inspect existing-network
W0920 21:17:55.040432   16274 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 21:17:55.040456   16274 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0920 21:17:55.040470   16274 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0920 21:17:55.040611   16274 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 21:17:55.055121   16274 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-38df5b94ae40 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:14:27:00:6a} reservation:<nil>}
I0920 21:17:55.055563   16274 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bee100}
I0920 21:17:55.055590   16274 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 21:17:55.055632   16274 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 21:17:55.111310   16274 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-325634 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-325634 --network=existing-network: (19.812680429s)
helpers_test.go:175: Cleaning up "existing-network-325634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-325634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-325634: (1.828465061s)
I0920 21:18:16.767651   16274 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (21.77s)
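The subnet walk in the log above (192.168.49.0/24 skipped as taken, 192.168.58.0/24 chosen) is worth spelling out. A sketch of that selection loop follows; the stride of 9 in the third octet is an inference from the subnets seen in this run (.49, .58, and later .67), and the hard-coded "taken" set stands in for a real inspection of existing bridge networks:

package main

import "fmt"

func main() {
	// Subnets already claimed by other docker networks; minikube derives
	// this from "docker network inspect", here it mirrors the run above.
	taken := map[int]bool{49: true}
	for octet := 49; octet <= 254; octet += 9 {
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		return
	}
}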

                                                
                                    
TestKicCustomSubnet (22.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-530744 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-530744 --subnet=192.168.60.0/24: (20.563269086s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-530744 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-530744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-530744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-530744: (1.9002621s)
--- PASS: TestKicCustomSubnet (22.48s)

                                                
                                    
TestKicStaticIP (23.61s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-224552 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-224552 --static-ip=192.168.200.200: (21.571175567s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-224552 ip
helpers_test.go:175: Cleaning up "static-ip-224552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-224552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-224552: (1.925913437s)
--- PASS: TestKicStaticIP (23.61s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (50.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-713537 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-713537 --driver=docker  --container-runtime=docker: (22.530547036s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-727380 --driver=docker  --container-runtime=docker
E0920 21:19:29.130984   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-727380 --driver=docker  --container-runtime=docker: (23.381053858s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-713537
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-727380
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-727380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-727380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-727380: (1.979328591s)
helpers_test.go:175: Cleaning up "first-713537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-713537
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-713537: (2.030505293s)
--- PASS: TestMinikubeProfile (50.98s)
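Both profile switches above are verified through "profile list -ojson". A schema-agnostic sketch that decodes the same output generically, assuming nothing about field names beyond the document being a JSON object:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]any
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, val := range doc {
		fmt.Printf("top-level key %q holds a %T\n", key, val)
	}
}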

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-469744 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-469744 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.234677512s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-469744 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.22s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-484244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-484244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.937934428s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.94s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-484244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.4s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-469744 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-469744 --alsologtostderr -v=5: (1.399312235s)
--- PASS: TestMountStart/serial/DeleteFirst (1.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-484244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-484244
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-484244: (1.155482532s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.49s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-484244
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-484244: (6.486739034s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-484244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.22s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (56.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-642857 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-642857 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.203157509s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.62s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (54.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- rollout status deployment/busybox
E0920 21:21:18.469092   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-642857 -- rollout status deployment/busybox: (2.084739269s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:20.527580   16274 retry.go:31] will retry after 837.022084ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:21.469451   16274 retry.go:31] will retry after 1.304062078s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:22.875579   16274 retry.go:31] will retry after 2.967747814s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:25.945411   16274 retry.go:31] will retry after 5.009336744s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:31.056998   16274 retry.go:31] will retry after 5.086868065s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:36.245728   16274 retry.go:31] will retry after 5.768164817s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:42.119128   16274 retry.go:31] will retry after 7.227348441s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 21:21:49.448715   16274 retry.go:31] will retry after 22.382027232s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-g8n84 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-pphk2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-g8n84 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-pphk2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-g8n84 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-pphk2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (54.84s)
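The retry trail above (retry.go:31, delays growing from ~0.8s to ~22s) is a poll-with-backoff loop around the pod-IP query. A sketch of the pattern; the backoff constants are illustrative rather than minikube's exact schedule, and kubectl is assumed to point at the multinode-642857 cluster:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

func main() {
	delay := 800 * time.Millisecond // first delay in the log is ~0.8s
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); {
		out, _ := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if len(strings.Fields(string(out))) >= 2 {
			fmt.Println("both pod IPs assigned:", strings.TrimSpace(string(out)))
			return
		}
		fmt.Printf("will retry after %v: expected 2 Pod IPs but got fewer\n", delay)
		time.Sleep(delay)
		// grow the delay with some jitter, roughly like the trail above
		delay += time.Duration(rand.Int63n(int64(delay)))
	}
	fmt.Println("timed out waiting for 2 Pod IPs")
}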

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-g8n84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-g8n84 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-pphk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-642857 -- exec busybox-7dff88458-pphk2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)
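The pipeline in multinode_test.go:572 pulls the resolved address of host.minikube.internal out of busybox's nslookup output (line 5, third space-separated field), and multinode_test.go:583 then pings it once. The same two steps for one pod, as a sketch; the pod name is from this run, and kubectl's current context is assumed to be this cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-g8n84" // pod name from this run
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	ip, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	host := strings.TrimSpace(string(ip))
	fmt.Println("host.minikube.internal resolves to:", host)
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+host).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}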

                                                
                                    
TestMultiNode/serial/AddNode (16.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-642857 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-642857 -v 3 --alsologtostderr: (16.12956855s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.71s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-642857 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp testdata/cp-test.txt multinode-642857:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4181333242/001/cp-test_multinode-642857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857:/home/docker/cp-test.txt multinode-642857-m02:/home/docker/cp-test_multinode-642857_multinode-642857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test_multinode-642857_multinode-642857-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857:/home/docker/cp-test.txt multinode-642857-m03:/home/docker/cp-test_multinode-642857_multinode-642857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test_multinode-642857_multinode-642857-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp testdata/cp-test.txt multinode-642857-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4181333242/001/cp-test_multinode-642857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m02:/home/docker/cp-test.txt multinode-642857:/home/docker/cp-test_multinode-642857-m02_multinode-642857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test_multinode-642857-m02_multinode-642857.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m02:/home/docker/cp-test.txt multinode-642857-m03:/home/docker/cp-test_multinode-642857-m02_multinode-642857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test_multinode-642857-m02_multinode-642857-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp testdata/cp-test.txt multinode-642857-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4181333242/001/cp-test_multinode-642857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m03:/home/docker/cp-test.txt multinode-642857:/home/docker/cp-test_multinode-642857-m03_multinode-642857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857 "sudo cat /home/docker/cp-test_multinode-642857-m03_multinode-642857.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 cp multinode-642857-m03:/home/docker/cp-test.txt multinode-642857-m02:/home/docker/cp-test_multinode-642857-m03_multinode-642857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 ssh -n multinode-642857-m02 "sudo cat /home/docker/cp-test_multinode-642857-m03_multinode-642857-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.38s)
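The copy matrix above repeats one pattern per node pair: push with "cp", read back over "ssh", compare against the source. One leg of it as a sketch, with the profile and paths from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	// Push the file to the primary node, then read it back over ssh.
	run("-p", "multinode-642857", "cp", "testdata/cp-test.txt",
		"multinode-642857:/home/docker/cp-test.txt")
	got := run("-p", "multinode-642857", "ssh", "-n", "multinode-642857",
		"sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("round-tripped file differs from testdata/cp-test.txt")
	}
	fmt.Println("cp round trip OK")
}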

                                                
                                    
TestMultiNode/serial/StopNode (2.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-642857 node stop m03: (1.160990249s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-642857 status: exit status 7 (424.466315ms)

                                                
                                                
-- stdout --
	multinode-642857
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-642857-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-642857-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr: exit status 7 (431.815118ms)

                                                
                                                
-- stdout --
	multinode-642857
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-642857-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-642857-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:22:41.027527  218840 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:22:41.027771  218840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:22:41.027781  218840 out.go:358] Setting ErrFile to fd 2...
	I0920 21:22:41.027785  218840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:22:41.027936  218840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:22:41.028084  218840 out.go:352] Setting JSON to false
	I0920 21:22:41.028114  218840 mustload.go:65] Loading cluster: multinode-642857
	I0920 21:22:41.028164  218840 notify.go:220] Checking for updates...
	I0920 21:22:41.028627  218840 config.go:182] Loaded profile config "multinode-642857": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:22:41.028653  218840 status.go:174] checking status of multinode-642857 ...
	I0920 21:22:41.029075  218840 cli_runner.go:164] Run: docker container inspect multinode-642857 --format={{.State.Status}}
	I0920 21:22:41.046376  218840 status.go:364] multinode-642857 host status = "Running" (err=<nil>)
	I0920 21:22:41.046396  218840 host.go:66] Checking if "multinode-642857" exists ...
	I0920 21:22:41.046581  218840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-642857
	I0920 21:22:41.062152  218840 host.go:66] Checking if "multinode-642857" exists ...
	I0920 21:22:41.062413  218840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:22:41.062455  218840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-642857
	I0920 21:22:41.078078  218840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/multinode-642857/id_rsa Username:docker}
	I0920 21:22:41.169985  218840 ssh_runner.go:195] Run: systemctl --version
	I0920 21:22:41.173733  218840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:22:41.183947  218840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 21:22:41.228974  218840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-20 21:22:41.22010125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 21:22:41.229538  218840 kubeconfig.go:125] found "multinode-642857" server: "https://192.168.67.2:8443"
	I0920 21:22:41.229571  218840 api_server.go:166] Checking apiserver status ...
	I0920 21:22:41.229607  218840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:22:41.239609  218840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2368/cgroup
	I0920 21:22:41.247229  218840 api_server.go:182] apiserver freezer: "12:freezer:/docker/69fd9f031ad8eac85b84e75eff386839fbde961b8ece785c78e98885b23b8716/kubepods/burstable/pod1ba844d18916e85b406945dec2182783/7baedcf8d403db96a4da31b0bb1e14d0de67cc098ee91027cb10e004e9640039"
	I0920 21:22:41.247281  218840 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/69fd9f031ad8eac85b84e75eff386839fbde961b8ece785c78e98885b23b8716/kubepods/burstable/pod1ba844d18916e85b406945dec2182783/7baedcf8d403db96a4da31b0bb1e14d0de67cc098ee91027cb10e004e9640039/freezer.state
	I0920 21:22:41.254168  218840 api_server.go:204] freezer state: "THAWED"
	I0920 21:22:41.254192  218840 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 21:22:41.258460  218840 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 21:22:41.258479  218840 status.go:456] multinode-642857 apiserver status = Running (err=<nil>)
	I0920 21:22:41.258489  218840 status.go:176] multinode-642857 status: &{Name:multinode-642857 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:22:41.258509  218840 status.go:174] checking status of multinode-642857-m02 ...
	I0920 21:22:41.258736  218840 cli_runner.go:164] Run: docker container inspect multinode-642857-m02 --format={{.State.Status}}
	I0920 21:22:41.274907  218840 status.go:364] multinode-642857-m02 host status = "Running" (err=<nil>)
	I0920 21:22:41.274927  218840 host.go:66] Checking if "multinode-642857-m02" exists ...
	I0920 21:22:41.275124  218840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-642857-m02
	I0920 21:22:41.290182  218840 host.go:66] Checking if "multinode-642857-m02" exists ...
	I0920 21:22:41.290405  218840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:22:41.290458  218840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-642857-m02
	I0920 21:22:41.305330  218840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19672-9514/.minikube/machines/multinode-642857-m02/id_rsa Username:docker}
	I0920 21:22:41.393752  218840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:22:41.403283  218840 status.go:176] multinode-642857-m02 status: &{Name:multinode-642857-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:22:41.403308  218840 status.go:174] checking status of multinode-642857-m03 ...
	I0920 21:22:41.403524  218840 cli_runner.go:164] Run: docker container inspect multinode-642857-m03 --format={{.State.Status}}
	I0920 21:22:41.419264  218840 status.go:364] multinode-642857-m03 host status = "Stopped" (err=<nil>)
	I0920 21:22:41.419278  218840 status.go:377] host is not running, skipping remaining checks
	I0920 21:22:41.419283  218840 status.go:176] multinode-642857-m03 status: &{Name:multinode-642857-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.02s)
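The status trace above shows the sequence minikube walks before reporting `apiserver status = Running`: resolve the kube-apiserver PID, confirm the process's freezer cgroup is THAWED, then probe `/healthz`. The following is a minimal Go sketch of that sequence; `cgroupPath` and `healthzURL` are illustrative placeholders, and a real probe would also need the cluster's TLS configuration, which minikube sets up separately.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
)

// checkAPIServer mirrors the log above: verify the apiserver's freezer
// cgroup is THAWED, then probe the /healthz endpoint.
func checkAPIServer(cgroupPath, healthzURL string) error {
	state, err := os.ReadFile(cgroupPath) // e.g. .../freezer.state
	if err != nil {
		return fmt.Errorf("read freezer state: %w", err)
	}
	if s := strings.TrimSpace(string(state)); s != "THAWED" {
		return fmt.Errorf("apiserver frozen: freezer state %q", s)
	}
	resp, err := http.Get(healthzURL) // e.g. https://192.168.67.2:8443/healthz
	if err != nil {
		return fmt.Errorf("healthz probe: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil // corresponds to "apiserver status = Running" above
}

func main() {
	// Both arguments are placeholders; the log shows the real values.
	fmt.Println(checkAPIServer(
		"/sys/fs/cgroup/freezer/docker/.../freezer.state",
		"https://192.168.67.2:8443/healthz",
	))
}
```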

TestMultiNode/serial/StartAfterStop (9.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 node start m03 -v=7 --alsologtostderr
E0920 21:22:41.534463   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-642857 node start m03 -v=7 --alsologtostderr: (8.899418495s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.50s)

TestMultiNode/serial/RestartKeepsNodes (96.7s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-642857
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-642857
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-642857: (22.223685905s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-642857 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-642857 --wait=true -v=8 --alsologtostderr: (1m14.396006233s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-642857
--- PASS: TestMultiNode/serial/RestartKeepsNodes (96.70s)

TestMultiNode/serial/DeleteNode (5.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 node delete m03
E0920 21:24:29.130792   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-642857 node delete m03: (4.551811577s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)

TestMultiNode/serial/StopMultiNode (21.45s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-642857 stop: (21.287951615s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-642857 status: exit status 7 (92.437965ms)

-- stdout --
	multinode-642857
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-642857-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr: exit status 7 (72.158635ms)

-- stdout --
	multinode-642857
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-642857-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 21:24:54.120973  234233 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:24:54.121067  234233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:54.121074  234233 out.go:358] Setting ErrFile to fd 2...
	I0920 21:24:54.121078  234233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:54.121248  234233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9514/.minikube/bin
	I0920 21:24:54.121396  234233 out.go:352] Setting JSON to false
	I0920 21:24:54.121421  234233 mustload.go:65] Loading cluster: multinode-642857
	I0920 21:24:54.121468  234233 notify.go:220] Checking for updates...
	I0920 21:24:54.121799  234233 config.go:182] Loaded profile config "multinode-642857": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:24:54.121817  234233 status.go:174] checking status of multinode-642857 ...
	I0920 21:24:54.122296  234233 cli_runner.go:164] Run: docker container inspect multinode-642857 --format={{.State.Status}}
	I0920 21:24:54.138287  234233 status.go:364] multinode-642857 host status = "Stopped" (err=<nil>)
	I0920 21:24:54.138304  234233 status.go:377] host is not running, skipping remaining checks
	I0920 21:24:54.138309  234233 status.go:176] multinode-642857 status: &{Name:multinode-642857 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:24:54.138348  234233 status.go:174] checking status of multinode-642857-m02 ...
	I0920 21:24:54.138557  234233 cli_runner.go:164] Run: docker container inspect multinode-642857-m02 --format={{.State.Status}}
	I0920 21:24:54.154001  234233 status.go:364] multinode-642857-m02 host status = "Stopped" (err=<nil>)
	I0920 21:24:54.154021  234233 status.go:377] host is not running, skipping remaining checks
	I0920 21:24:54.154029  234233 status.go:176] multinode-642857-m02 status: &{Name:multinode-642857-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.45s)

TestMultiNode/serial/RestartMultiNode (51.2s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-642857 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-642857 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.676415441s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-642857 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.20s)

TestMultiNode/serial/ValidateNameConflict (23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-642857
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-642857-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-642857-m02 --driver=docker  --container-runtime=docker: exit status 14 (56.051324ms)

-- stdout --
	* [multinode-642857-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-642857-m02' is duplicated with machine name 'multinode-642857-m02' in profile 'multinode-642857'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-642857-m03 --driver=docker  --container-runtime=docker
E0920 21:25:52.201392   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-642857-m03 --driver=docker  --container-runtime=docker: (20.674745221s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-642857
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-642857: exit status 80 (251.009061ms)

-- stdout --
	* Adding node m03 to cluster multinode-642857 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-642857-m03 already exists in multinode-642857-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-642857-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-642857-m03: (1.980781365s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.00s)

TestPreload (94.27s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-307268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0920 21:26:18.468643   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-307268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (50.320922948s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-307268 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-307268 image pull gcr.io/k8s-minikube/busybox: (1.354131959s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-307268
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-307268: (10.622227652s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-307268 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-307268 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (29.681705233s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-307268 image list
helpers_test.go:175: Cleaning up "test-preload-307268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-307268
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-307268: (2.097050485s)
--- PASS: TestPreload (94.27s)

TestScheduledStopUnix (93.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-205971 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-205971 --memory=2048 --driver=docker  --container-runtime=docker: (20.955504954s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-205971 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-205971 -n scheduled-stop-205971
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-205971 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 21:28:07.593016   16274 retry.go:31] will retry after 143.098µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.594207   16274 retry.go:31] will retry after 156.393µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.595347   16274 retry.go:31] will retry after 299.353µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.596487   16274 retry.go:31] will retry after 359.779µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.597584   16274 retry.go:31] will retry after 465.477µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.598735   16274 retry.go:31] will retry after 842.706µs: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.599881   16274 retry.go:31] will retry after 1.021403ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.600999   16274 retry.go:31] will retry after 2.064823ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.603124   16274 retry.go:31] will retry after 2.586948ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.606319   16274 retry.go:31] will retry after 4.289069ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.611508   16274 retry.go:31] will retry after 8.596342ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.620742   16274 retry.go:31] will retry after 11.007643ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.631935   16274 retry.go:31] will retry after 7.280699ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
I0920 21:28:07.640125   16274 retry.go:31] will retry after 27.735191ms: open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/scheduled-stop-205971/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-205971 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-205971 -n scheduled-stop-205971
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-205971
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-205971 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-205971
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-205971: exit status 7 (57.348623ms)

-- stdout --
	scheduled-stop-205971
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-205971 -n scheduled-stop-205971
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-205971 -n scheduled-stop-205971: exit status 7 (57.320977ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-205971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-205971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-205971: (1.559909233s)
--- PASS: TestScheduledStopUnix (93.70s)
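The `retry.go:31` lines above trace a jittered, roughly doubling backoff around reading the scheduled-stop pid file. Below is a minimal Go sketch of that retry shape; the attempt count, initial wait, jitter policy, and `pidPath` are illustrative assumptions, not minikube's actual values.

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff retries fn with a roughly doubling, jittered wait,
// similar in shape to the retry.go trace in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) + 1))
		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return err
}

func main() {
	pidPath := "/tmp/scheduled-stop-pid" // hypothetical; the log reads the profile's pid file
	err := retryWithBackoff(5, 200*time.Microsecond, func() error {
		_, err := os.ReadFile(pidPath)
		return err
	})
	fmt.Println("final result:", err)
}
```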

TestSkaffold (96.64s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe544897368 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-544842 --memory=2600 --driver=docker  --container-runtime=docker
E0920 21:29:29.130432   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-544842 --memory=2600 --driver=docker  --container-runtime=docker: (23.519501314s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe544897368 run --minikube-profile skaffold-544842 --kube-context skaffold-544842 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe544897368 run --minikube-profile skaffold-544842 --kube-context skaffold-544842 --status-check=true --port-forward=false --interactive=false: (58.809842411s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-76fb4b98d7-nd94q" [92fb8b12-7a32-498f-9761-9820c70fcd06] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002564299s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6748cbcdd9-679zg" [457555fd-4496-47d0-810b-0023b10784c9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003763482s
helpers_test.go:175: Cleaning up "skaffold-544842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-544842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-544842: (2.663752175s)
--- PASS: TestSkaffold (96.64s)

TestInsufficientStorage (12.56s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-099898 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-099898 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.502080679s)

-- stdout --
	{"specversion":"1.0","id":"d8b162ba-e580-40b0-8666-59ff7dadf9f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-099898] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c8e22d0-7158-4262-9161-08de808dc819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"0972df76-39c9-44c1-99a5-e0072177a1ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d9201756-45f9-4e33-af0e-0e5ea8feb784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig"}}
	{"specversion":"1.0","id":"9507f7bf-dd0a-4895-ac42-2e5adb7688a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube"}}
	{"specversion":"1.0","id":"5a79196f-eb48-4902-9a93-94ad9341a36b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"292a4ba8-f8ce-4364-961e-58636b6bdc16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f9f253b-6cc8-482c-a2d8-668bd105c582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"debad184-0497-4938-8436-890dcbf98377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cd24b519-9da0-42e3-9a07-303967052fb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae724d12-961b-4d02-b6e2-3144ec361af5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ecddfe19-12bd-4490-939b-e8380864dd28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-099898\" primary control-plane node in \"insufficient-storage-099898\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"edf2d3e0-eb9a-4e41-9ceb-aaea13defb44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c61264-0e83-4f18-8194-8b444c951aba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d124d764-2026-44e5-a2fc-560c3d6c735e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-099898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-099898 --output=json --layout=cluster: exit status 7 (235.57397ms)

-- stdout --
	{"Name":"insufficient-storage-099898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-099898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 21:31:07.347669  274269 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-099898" does not appear in /home/jenkins/minikube-integration/19672-9514/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-099898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-099898 --output=json --layout=cluster: exit status 7 (238.064471ms)

-- stdout --
	{"Name":"insufficient-storage-099898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-099898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 21:31:07.585815  274369 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-099898" does not appear in /home/jenkins/minikube-integration/19672-9514/kubeconfig
	E0920 21:31:07.595197  274369 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/insufficient-storage-099898/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-099898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-099898
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-099898: (1.580019334s)
--- PASS: TestInsufficientStorage (12.56s)
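`minikube start --output=json` emits one CloudEvents v1.0 envelope per line, as in the stdout block above. The following small Go sketch consumes such a stream from stdin and surfaces `io.k8s.sigs.minikube.error` events; the field names are taken from the log, while the program itself is illustrative.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models the JSON lines in the stdout block above:
// CloudEvents v1.0 envelopes whose data payload is a string map.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read one event per line from stdin, e.g. piped from minikube.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that isn't a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
```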

TestRunningBinaryUpgrade (100.25s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.952653636 start -p running-upgrade-044131 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0920 21:31:18.468736   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.952653636 start -p running-upgrade-044131 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m16.649169525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-044131 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-044131 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.012855456s)
helpers_test.go:175: Cleaning up "running-upgrade-044131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-044131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-044131: (2.027081788s)
--- PASS: TestRunningBinaryUpgrade (100.25s)

TestKubernetesUpgrade (342.77s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.764910837s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-925923
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-925923: (10.873103766s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-925923 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-925923 status --format={{.Host}}: exit status 7 (106.874002ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m26.366676516s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-925923 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (59.259221ms)

-- stdout --
	* [kubernetes-upgrade-925923] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-925923
	    minikube start -p kubernetes-upgrade-925923 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9259232 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-925923 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925923 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.969827671s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-925923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-925923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-925923: (2.579990815s)
--- PASS: TestKubernetesUpgrade (342.77s)
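Exit status 106 (`K8S_DOWNGRADE_UNSUPPORTED`) above comes from the requested version comparing lower than the running cluster's. A sketch of that guard follows, using `golang.org/x/mod/semver` for the comparison as an assumption; minikube's real check is its own code path.

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// validateVersionChange refuses downgrades, mirroring the behaviour
// behind exit status 106 above; it is illustrative, not minikube's code.
func validateVersionChange(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	// The transition attempted in the log: v1.31.1 -> v1.20.0.
	if err := validateVersionChange("v1.31.1", "v1.20.0"); err != nil {
		fmt.Println("K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}
```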

TestMissingContainerUpgrade (137.27s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.141776524 start -p missing-upgrade-191889 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.141776524 start -p missing-upgrade-191889 --memory=2200 --driver=docker  --container-runtime=docker: (1m18.083551687s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-191889
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-191889: (10.589322743s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-191889
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-191889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-191889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.970143777s)
helpers_test.go:175: Cleaning up "missing-upgrade-191889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-191889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-191889: (2.14543883s)
--- PASS: TestMissingContainerUpgrade (137.27s)

TestPause/serial/Start (65.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-756344 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-756344 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m5.236145172s)
--- PASS: TestPause/serial/Start (65.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (60.394933ms)

-- stdout --
	* [NoKubernetes-503362] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9514/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9514/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (30.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503362 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503362 --driver=docker  --container-runtime=docker: (30.390317077s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-503362 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.71s)

TestNoKubernetes/serial/StartWithStopK8s (6.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --driver=docker  --container-runtime=docker: (4.823112414s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-503362 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-503362 status -o json: exit status 2 (253.393554ms)

-- stdout --
	{"Name":"NoKubernetes-503362","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-503362
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-503362: (1.662124855s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.74s)

TestNoKubernetes/serial/Start (11.06s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503362 --no-kubernetes --driver=docker  --container-runtime=docker: (11.064842863s)
--- PASS: TestNoKubernetes/serial/Start (11.06s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (61.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4131836887 start -p stopped-upgrade-192213 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4131836887 start -p stopped-upgrade-192213 --memory=2200 --vm-driver=docker  --container-runtime=docker: (25.252935977s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4131836887 -p stopped-upgrade-192213 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4131836887 -p stopped-upgrade-192213 stop: (10.851052833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-192213 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-192213 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.466233757s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (61.57s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-503362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-503362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (249.832448ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
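VerifyK8sNotRunning passes because `systemctl is-active --quiet service kubelet` exits non-zero (status 3, i.e. inactive) once kubelet is stopped. Below is a local-only Go sketch of the same check; the test runs the command over `minikube ssh`, which this sketch skips.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive runs the same command the test sends over ssh.
// Exit status 0 means active; any non-zero status (3 in the log,
// meaning inactive) is treated as "not running".
func kubeletActive() (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // ran, but kubelet is not active
	}
	return false, err // systemctl itself could not be run
}

func main() {
	active, err := kubeletActive()
	fmt.Println("kubelet active:", active, "err:", err)
}
```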

TestNoKubernetes/serial/ProfileList (52.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (29.560660152s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (22.554089612s)
--- PASS: TestNoKubernetes/serial/ProfileList (52.12s)

TestPause/serial/SecondStartNoReconfiguration (30.9s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-756344 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-756344 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.885892654s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.90s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-756344 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-756344 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-756344 --output=json --layout=cluster: exit status 2 (304.345111ms)

-- stdout --
	{"Name":"pause-756344","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-756344","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
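The `--layout=cluster` JSON above encodes component state with HTTP-flavoured codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and (earlier in this report) 507 InsufficientStorage. A small Go mapping covering just the codes that appear in this report:

```go
package main

import "fmt"

// statusName maps the HTTP-flavoured codes used by
// `minikube status --output=json --layout=cluster` to the names
// seen in this report; codes outside this set are not covered here.
func statusName(code int) string {
	switch code {
	case 200:
		return "OK"
	case 405:
		return "Stopped"
	case 418:
		return "Paused"
	case 500:
		return "Error"
	case 507:
		return "InsufficientStorage"
	default:
		return fmt.Sprintf("Unknown(%d)", code)
	}
}

func main() {
	for _, c := range []int{200, 405, 418, 500, 507} {
		fmt.Printf("%d -> %s\n", c, statusName(c))
	}
}
```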

TestPause/serial/Unpause (0.44s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-756344 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.44s)

TestPause/serial/PauseAgain (0.63s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-756344 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

TestPause/serial/DeletePaused (2.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-756344 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-756344 --alsologtostderr -v=5: (2.062706167s)
--- PASS: TestPause/serial/DeletePaused (2.06s)

TestPause/serial/VerifyDeletedResources (13.87s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0920 21:34:29.131093   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.81664836s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-756344
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-756344: exit status 1 (15.768708ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-756344: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.87s)

TestNoKubernetes/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-503362
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-503362: (1.17205154s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

TestNoKubernetes/serial/StartNoArgs (8.24s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503362 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503362 --driver=docker  --container-runtime=docker: (8.238925604s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.24s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-192213
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-192213: (1.115886057s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-503362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-503362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.918122ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
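For reference, a minimal Go sketch of the assertion above: the test runs `systemctl is-active --quiet` over SSH and treats any non-zero exit (status 3 in the stderr above) as "kubelet is not running". The function name is illustrative, and the sketch runs systemctl locally rather than over SSH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive distinguishes "unit not active" (non-zero exit, as seen
// in the log above) from "could not run systemctl at all".
func kubeletActive() (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		return true, nil // exit 0: unit is active
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, nil // non-zero exit: unit is not active
	}
	return false, err // systemctl itself failed to run
}

func main() {
	active, err := kubeletActive()
	fmt.Println(active, err)
}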

TestStartStop/group/old-k8s-version/serial/FirstStart (104.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-854870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-854870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (1m44.307014604s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.31s)

TestStartStop/group/no-preload/serial/FirstStart (41.97s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-618718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 21:35:42.939242   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:42.945579   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:42.956939   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:42.978262   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:43.019600   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:43.100897   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:43.263040   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:43.585029   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:44.226599   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:45.507906   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:48.069199   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:53.191427   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:36:03.433231   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-618718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (41.974036815s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (41.97s)

TestStartStop/group/no-preload/serial/DeployApp (9.24s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-618718 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [181043ae-8995-4ba8-a4b5-99b2c398fbdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [181043ae-8995-4ba8-a4b5-99b2c398fbdd] Running
E0920 21:36:18.468983   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003704044s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-618718 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)
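For reference, a hedged client-go sketch of the wait the helpers perform above ("waiting 8m0s for pods matching integration-test=busybox"): poll until a pod with that label reports phase Running. The suite uses its own helpers in helpers_test.go, not this code; everything here is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the current kubeconfig (~/.kube/config by default).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s, up to the 8m budget the test uses above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=busybox",
			})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep polling until Running or timeout
		})
	fmt.Println("wait result:", err)
}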

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-618718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-618718 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/no-preload/serial/Stop (10.72s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-618718 --alsologtostderr -v=3
E0920 21:36:23.915456   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-618718 --alsologtostderr -v=3: (10.723258685s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.72s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-618718 -n no-preload-618718
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-618718 -n no-preload-618718: exit status 7 (121.683547ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-618718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
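For reference, a minimal Go sketch of the status probe above, assuming only what the log shows: `minikube status --format={{.Host}}` prints "Stopped" and exits 7 once the host is down, and the test treats that exit code as acceptable before re-enabling an addon. The helper name is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStopped tolerates a non-zero exit (status 7 in the log above) and
// checks the stdout the command still produced.
func hostStopped(profile string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return false, err // the binary itself could not be run
	}
	return strings.TrimSpace(string(out)) == "Stopped", nil
}

func main() {
	stopped, err := hostStopped("no-preload-618718")
	fmt.Println(stopped, err)
}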

TestStartStop/group/no-preload/serial/SecondStart (297.33s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-618718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-618718 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m57.025431123s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-618718 -n no-preload-618718
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (297.33s)

TestStartStop/group/embed-certs/serial/FirstStart (34.38s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-238618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 21:37:04.877241   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-238618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (34.38012007s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (34.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-854870 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2b8c2ca-3252-41d4-be79-4c119cee7940] Pending
helpers_test.go:344: "busybox" [b2b8c2ca-3252-41d4-be79-4c119cee7940] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2b8c2ca-3252-41d4-be79-4c119cee7940] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004421157s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-854870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-854870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-854870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/old-k8s-version/serial/Stop (10.8s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-854870 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-854870 --alsologtostderr -v=3: (10.797267881s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.80s)

TestStartStop/group/embed-certs/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-238618 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cb8fd68-97cc-4544-a2d4-1a2251ab0a62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7cb8fd68-97cc-4544-a2d4-1a2251ab0a62] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003398094s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-238618 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-238618 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-238618 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-854870 -n old-k8s-version-854870
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-854870 -n old-k8s-version-854870: exit status 7 (67.546404ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-854870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-854870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-854870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m19.996952906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-854870 -n old-k8s-version-854870
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.29s)

TestStartStop/group/embed-certs/serial/Stop (11.52s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-238618 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-238618 --alsologtostderr -v=3: (11.518168525s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.52s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-238618 -n embed-certs-238618
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-238618 -n embed-certs-238618: exit status 7 (94.912515ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-238618 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (299.92s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-238618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-238618 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m59.562209006s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-238618 -n embed-certs-238618
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.92s)

TestStartStop/group/newest-cni/serial/FirstStart (29.79s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-293651 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 21:38:26.799510   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-293651 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (29.790785916s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-293651 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (10.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-293651 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-293651 --alsologtostderr -v=3: (10.719804433s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-293651 -n newest-cni-293651
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-293651 -n newest-cni-293651: exit status 7 (79.664882ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-293651 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (14.01s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-293651 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-293651 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (13.699713988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-293651 -n newest-cni-293651
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-293651 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-293651 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-293651 -n newest-cni-293651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-293651 -n newest-cni-293651: exit status 2 (275.11705ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-293651 -n newest-cni-293651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-293651 -n newest-cni-293651: exit status 2 (272.598271ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-293651 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-293651 -n newest-cni-293651
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-293651 -n newest-cni-293651
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.54s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-867967 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 21:39:29.131356   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/functional-284790/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-867967 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m1.540644445s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wtrs6" [2ebc9b94-cbb1-4ffb-a148-47e88bdfd030] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003618696s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wtrs6" [2ebc9b94-cbb1-4ffb-a148-47e88bdfd030] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003353303s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-854870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-854870 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-854870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-854870 -n old-k8s-version-854870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-854870 -n old-k8s-version-854870: exit status 2 (270.894513ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-854870 -n old-k8s-version-854870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-854870 -n old-k8s-version-854870: exit status 2 (277.220981ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-854870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-854870 -n old-k8s-version-854870
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-854870 -n old-k8s-version-854870
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.26s)

TestNetworkPlugins/group/auto/Start (34.84s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (34.835285165s)
--- PASS: TestNetworkPlugins/group/auto/Start (34.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-867967 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc02f734-ce51-4fa3-93d6-6c25f50decd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dc02f734-ce51-4fa3-93d6-6c25f50decd0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004367038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-867967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-867967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-867967 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-867967 --alsologtostderr -v=3
E0920 21:40:42.938687   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-867967 --alsologtostderr -v=3: (10.872851843s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.87s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-125174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967: exit status 7 (96.153272ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-867967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-867967 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
I0920 21:40:46.142138   16274 config.go:182] Loaded profile config "auto-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-867967 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.662927521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.03s)

TestNetworkPlugins/group/auto/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5nh9j" [1ac309ae-1914-46be-af4c-bbfe2a3cc002] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5nh9j" [1ac309ae-1914-46be-af4c-bbfe2a3cc002] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003894468s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
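For reference, a minimal Go sketch of the hairpin check above: from inside the netcat deployment, dial the deployment's own service name, which only succeeds when hairpin traffic works. The kubectl command string is taken verbatim from the log; the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs: nc dials the pod's own service "netcat".
	out, err := exec.Command("kubectl", "--context", "auto-125174",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	fmt.Println(string(out), err)
}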

TestNetworkPlugins/group/kindnet/Start (55.64s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0920 21:41:18.469243   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/addons-135472/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.642786349s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vlxkh" [cba383c2-b7f7-4915-bade-7d6c9cd2d58c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003393134s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vlxkh" [cba383c2-b7f7-4915-bade-7d6c9cd2d58c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002809814s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-618718 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-618718 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.42s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-618718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-618718 -n no-preload-618718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-618718 -n no-preload-618718: exit status 2 (306.881806ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-618718 -n no-preload-618718
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-618718 -n no-preload-618718: exit status 2 (302.357693ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-618718 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-618718 -n no-preload-618718
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-618718 -n no-preload-618718
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.42s)

TestNetworkPlugins/group/calico/Start (52.04s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (52.034906455s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.04s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-v8hqv" [6e188e5f-ccb8-4114-9102-4d1799fd671b] Running
E0920 21:42:14.896229   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:14.902676   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:14.913988   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:14.935297   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:14.976691   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:15.058165   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:15.219663   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:15.541025   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00366978s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
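
Note: ControllerPod waits for the CNI agent pod purely by label selector and namespace. A manual spot-check against the same profile might look like this (a sketch; kubectl wait is an equivalent check, not what the test harness itself calls):

	kubectl --context kindnet-125174 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-125174 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m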

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-125174 "pgrep -a kubelet"
E0920 21:42:16.182720   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
I0920 21:42:16.471307   16274 config.go:182] Loaded profile config "kindnet-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)
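
Note: KubeletFlags inspects the kubelet command line inside the node over minikube ssh. To eyeball the flags by hand (a sketch; the grep filter is illustrative only, not part of the test):

	out/minikube-linux-amd64 ssh -p kindnet-125174 "pgrep -a kubelet"
	out/minikube-linux-amd64 ssh -p kindnet-125174 "pgrep -a kubelet" | tr ' ' '\n' | grep -i -e cni -e network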

TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9vxnq" [33bf2303-2cc4-462b-8b97-587cb60ddaa4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 21:42:17.464565   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:42:20.026770   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9vxnq" [33bf2303-2cc4-462b-8b97-587cb60ddaa4] Running
E0920 21:42:25.148462   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003306239s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)
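
Note: each NetCatPod step force-replaces a small netcat deployment and then waits for its pod to report Ready. A hand-run equivalent of that wait (sketch):

	kubectl --context kindnet-125174 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-125174 wait --for=condition=Ready pod -l app=netcat --timeout=15m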

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)
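
Note: the DNS step resolves kubernetes.default from inside the netcat pod. One way to cross-check the answer against the API service's ClusterIP (sketch):

	kubectl --context kindnet-125174 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-125174 get svc kubernetes -o jsonpath='{.spec.clusterIP}'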

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)
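
Note: Localhost and HairPin both drive netcat in scan mode: -z connects without sending data, -w 5 caps the wait at 5 seconds, and -i 5 sets a 5-second interval. HairPin dials the pod's own Service name (netcat), so a zero exit confirms hairpin traffic works; for example:

	kubectl --context kindnet-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; echo exit=$?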

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-29xxb" [8b9cc3c6-c68e-4e99-a5c1-c2115b391c4f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005106801s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-125174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/Start (42.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
I0920 21:42:45.560631   16274 config.go:182] Loaded profile config "calico-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (42.775899538s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (42.78s)

TestNetworkPlugins/group/calico/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hvpfn" [695c02e1-9dbe-41bf-b539-7bed53d5e513] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hvpfn" [695c02e1-9dbe-41bf-b539-7bed53d5e513] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003491696s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t4w8w" [4b3ce097-b373-4731-80f2-a7230b7a1a34] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00960319s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t4w8w" [4b3ce097-b373-4731-80f2-a7230b7a1a34] Running
E0920 21:42:55.871786   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004132429s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-238618 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-238618 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
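
Note: VerifyKubernetesImages lists the images loaded in the node and reports anything outside the expected minikube set (here the busybox test image). To dump the list by hand (a sketch; the jq filter assumes each JSON entry carries a repoTags array, which may differ by minikube version):

	out/minikube-linux-amd64 -p embed-certs-238618 image list --format=json
	out/minikube-linux-amd64 -p embed-certs-238618 image list --format=json | jq -r '.[].repoTags[]'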

TestStartStop/group/embed-certs/serial/Pause (2.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-238618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-238618 -n embed-certs-238618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-238618 -n embed-certs-238618: exit status 2 (294.636344ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-238618 -n embed-certs-238618
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-238618 -n embed-certs-238618: exit status 2 (278.527843ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-238618 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-238618 -n embed-certs-238618
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-238618 -n embed-certs-238618
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)
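
Note: the Pause sequence above is pause, status, unpause, status; while paused, status prints Paused/Stopped and exits with code 2, which the test treats as acceptable ("may be ok"). Reproduced by hand (sketch):

	out/minikube-linux-amd64 pause -p embed-certs-238618 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-238618   # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-238618     # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-238618 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-238618     # no error once unpaused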

TestNetworkPlugins/group/false/Start (65.36s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m5.357416538s)
--- PASS: TestNetworkPlugins/group/false/Start (65.36s)

TestNetworkPlugins/group/enable-default-cni/Start (70.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m10.962355193s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.96s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-125174 "pgrep -a kubelet"
I0920 21:43:28.429816   16274 config.go:182] Loaded profile config "custom-flannel-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-znhpn" [9ab32fc9-a60c-4c47-b444-b72f9b820b32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-znhpn" [9ab32fc9-a60c-4c47-b444-b72f9b820b32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003777283s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0920 21:43:36.833944   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (45.39s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (45.393251577s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.39s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-125174 "pgrep -a kubelet"
I0920 21:44:08.922067   16274 config.go:182] Loaded profile config "false-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vm6bl" [d58a77e0-398c-4d25-9553-812607378465] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vm6bl" [d58a77e0-398c-4d25-9553-812607378465] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004559175s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.16s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-125174 "pgrep -a kubelet"
I0920 21:44:29.867229   16274 config.go:182] Loaded profile config "enable-default-cni-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tbt2l" [dee59fd2-699e-4f84-8059-34ebe2938dbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tbt2l" [dee59fd2-699e-4f84-8059-34ebe2938dbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003165815s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/kubenet/Start (70.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m10.158065843s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (70.16s)
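
Note: the Start variants in this group differ only in how pod networking is selected: --cni=<name> picks a built-in plugin (calico above), --cni=<path> applies a custom manifest (testdata/kube-flannel.yaml), --cni=false disables CNI, --enable-default-cni=true uses the legacy default bridge, and --network-plugin=kubenet selects kubenet. The two forms that do not use --cni, as run above (sketch):

	out/minikube-linux-amd64 start -p enable-default-cni-125174 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 start -p kubenet-125174 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker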

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tx5sb" [8bafc038-d6b4-4ecc-8afa-56598037d44b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004244989s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-125174 "pgrep -a kubelet"
I0920 21:44:47.332042   16274 config.go:182] Loaded profile config "flannel-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7xntz" [d127c768-dc08-4cc9-859e-ee72e28ceba1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7xntz" [d127c768-dc08-4cc9-859e-ee72e28ceba1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004676806s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (68.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0920 21:44:58.755776   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/old-k8s-version-854870/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-125174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m8.646804673s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qw242" [419f97f0-33ea-4a02-a550-995adad02744] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004504546s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qw242" [419f97f0-33ea-4a02-a550-995adad02744] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003528611s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-867967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-867967 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-867967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967: exit status 2 (272.00068ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967: exit status 2 (268.250464ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-867967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-867967 -n default-k8s-diff-port-867967
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)
E0920 21:45:25.254453   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:25.335875   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:25.497388   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:25.818826   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:26.460104   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:27.741606   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:30.303721   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:35.425795   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:42.939592   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/skaffold-544842/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:45.667600   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/default-k8s-diff-port-867967/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.334423   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.340817   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.352235   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.373624   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.414989   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.496352   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.658206   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:46.979861   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:47.621851   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-125174 "pgrep -a kubelet"
I0920 21:45:48.140891   16274 config.go:182] Loaded profile config "kubenet-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-89hl7" [f036aaa7-24bd-42c7-9832-29ce04e86c96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 21:45:48.903273   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-89hl7" [f036aaa7-24bd-42c7-9832-29ce04e86c96] Running
E0920 21:45:51.465363   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:45:56.586679   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/auto-125174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003216673s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.16s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-125174 "pgrep -a kubelet"
I0920 21:46:07.620072   16274 config.go:182] Loaded profile config "bridge-125174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-125174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ct65r" [c556fb81-b482-4b42-8ab7-5c3e22bdd9e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ct65r" [c556fb81-b482-4b42-8ab7-5c3e22bdd9e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003567744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.17s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-125174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-125174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0920 21:46:18.078024   16274 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/no-preload-618718/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-459407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-459407
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)
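
The cleanup step above is the standard pattern for throwaway profiles: delete the profile by name with the built minikube binary. A minimal sketch of such a helper, assuming t.Cleanup and the binary path shown in the log:

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile is a hypothetical sketch of that cleanup: register a deferred
// `minikube delete -p <profile>` so the throwaway profile is removed even when
// the test skips or fails. Binary path and command are taken from the log.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}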

TestNetworkPlugins/group/cilium (3.23s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-125174 [pass: true] --------------------------------
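
Every probe below is the same shape: shell out to kubectl or minikube against the cilium-125174 profile and record whatever comes back, errors included. A minimal Go sketch of one such probe (command and context name taken from the log; the real collector in minikube's test helpers may differ):

package main

import (
	"fmt"
	"os/exec"
)

// A single debugLogs-style probe: run kubectl against the profile's context
// and print whatever comes back, stderr included, without failing on error.
func main() {
	out, _ := exec.Command("kubectl", "--context", "cilium-125174", "get", "nodes").CombinedOutput()
	fmt.Printf(">>> k8s: nodes:\n%s\n", out)
}
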
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-125174

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-125174

>>> host: /etc/nsswitch.conf:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/hosts:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/resolv.conf:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-125174

>>> host: crictl pods:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: crictl containers:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> k8s: describe netcat deployment:
error: context "cilium-125174" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-125174" does not exist

>>> k8s: netcat logs:
error: context "cilium-125174" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-125174" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-125174" does not exist

>>> k8s: coredns logs:
error: context "cilium-125174" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-125174" does not exist

>>> k8s: api server logs:
error: context "cilium-125174" does not exist

>>> host: /etc/cni:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: ip a s:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: ip r s:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: iptables-save:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: iptables table nat:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-125174

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-125174

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-125174" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-125174" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-125174

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-125174

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-125174" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-125174" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-125174" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-125174" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-125174" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: kubelet daemon config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> k8s: kubelet logs:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 21:32:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-925923
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9514/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 21:33:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-756344
contexts:
- context:
    cluster: kubernetes-upgrade-925923
    user: kubernetes-upgrade-925923
  name: kubernetes-upgrade-925923
- context:
    cluster: pause-756344
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 21:33:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-756344
  name: pause-756344
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-925923
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/kubernetes-upgrade-925923/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/kubernetes-upgrade-925923/client.key
- name: pause-756344
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/pause-756344/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9514/.minikube/profiles/pause-756344/client.key
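
Note that the kubeconfig above lists only kubernetes-upgrade-925923 and pause-756344; no cilium-125174 entry was ever written, which is why every probe in this debug log fails with a missing-context error. A minimal client-go sketch of that check (the kubeconfig path is an assumption about the job's KUBECONFIG; debugLogs itself just shells out to kubectl):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// Load the kubeconfig dumped above and check for the missing context. The
// path is an assumption; the real file is whatever KUBECONFIG the job used.
func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19672-9514/.minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["cilium-125174"]; !ok {
		fmt.Println(`context "cilium-125174" does not exist`) // matches the kubectl errors above
	}
}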

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-125174

>>> host: docker daemon status:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: docker daemon config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: docker system info:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: cri-docker daemon status:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: cri-docker daemon config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: cri-dockerd version:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: containerd daemon status:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: containerd daemon config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: containerd config dump:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: crio daemon status:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: crio daemon config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: /etc/crio:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

>>> host: crio config:
* Profile "cilium-125174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125174"

----------------------- debugLogs end: cilium-125174 [took: 3.104042902s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-125174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-125174
--- SKIP: TestNetworkPlugins/group/cilium (3.23s)